Stanford HAI co-director Fei-Fei Li says the next frontier in AI lies in advancing spatial intelligence. In this op-ed, she explains how enabling machines to perceive and interact with the world in 3D can unlock human-centered AI applications for robotics, healthcare, education, and beyond.
This year, Stanford HAI celebrated its fifth anniversary. Looking forward, the institute aims to continue fostering a better understanding of AI's impacts on society.
Rishi Bommasani, Society Lead at HAI's CRFM, discusses where AI is proving most dangerous, why openness is important, and how regulators are thinking about the divide between open and closed models.
Stanford's RegLab, directed by HAI Senior Fellow Daniel E. Ho, developed an AI model that helped Santa Clara accelerate the process of flagging and mapping restrictive covenants.
Dan Ho, HAI Senior Fellow and director of the Stanford RegLab, discusses RegLab's AI model that analyzes decades of property records, helping to identify illegal racially restrictive language in housing documents.
Despite huge advancements in machine learning and neural networks, AI systems still depend on human direction. This article references HAI's 2022 conference where attendees were encouraged to rethink AI systems with a “human in the loop” and consider a future where people remain at the center of decision making.
AI expert Gary Marcus references HAI's study showing that LLM responses to medical questions vary widely and are often inaccurate.
Peter Norvig, Distinguished Education Fellow at Stanford HAI, comments on how limiting an AI agent's budget, transaction times, and capabilities can help AI agents “operate safely within defined boundaries.”
Fareed Zakaria speaks with “Godmother of AI” Fei-Fei Li about her journey as a computer scientist and how it influenced the development of modern AI.
This article cites the Stanford HAI AI Index's data on copyright infringement involving creative works and AI models.
James Landay, Co-Founder of Stanford HAI, says the real harms of AI are disinformation, deepfakes, discrimination, and job displacement, though not much of this has happened yet.
HAI Co-Director James Landay comments on Senate Bill 1047, which concerns safety testing of AI models before they are rolled out to the public.
HAI Co-Director Fei-Fei Li is recognized for her commitment to ethical AI and interdisciplinary research, continuing to shape the future of AI development and application.
This article gives an overview of the sectors and business processes affected by AI tool adoption, including healthcare, retail, hiring, and education, citing the AI Index's data on a surge in fundraising for generative AI companies since 2022.
With the release of Meta's Llama 3.1, Percy Liang, Director of CRFM and Senior Fellow at Stanford HAI, comments on the potential shift of users from other commercial AI tools to Llama 3.1.
CRFM Society Lead Rishi Bommasani comments on the lack of clarity on what has changed in the year since major AI companies adopted the White House's set of eight voluntary commitments on how to develop AI in a safe and trustworthy way.
This article discusses the Stanford HAI Congressional Boot Camp on AI and the strategic positioning of universities to help lawmakers understand the power of AI and set guardrails for AI's wide reach.
Vanessa Parli, HAI Director of Research Programs, explains the importance of evaluation methods when it comes to AI benchmarking, noting the significance of assessing traits like "bias, toxicity, truthfulness, and other responsibility aspects."