As AI use grows, how can we safeguard privacy, security, and data protection for individuals and organizations?
Strategic stability exists when neither side thinks it can improve its strategic outcome by striking first.

An Amazon-backed fellowship will support 10 Stanford PhD students whose work explores everything from how we communicate to understanding disease and protecting our data.

A key promise of machine learning is the ability to assist users with personal tasks.

This brief examines the privacy risks foundation models pose to individuals and society, and governance mechanisms needed to address them.

We need to rethink student assessment, AI literacy, and technology’s usefulness, according to experts at the recent AI+Education Summit.

A Stanford HAI workshop brought together experts to develop new evaluation methods that assess AI's hidden capabilities, not just its test-taking performance.


In this testimony presented to the U.S. House Committee on Energy and Commerce’s Subcommittee on Oversight and Investigations hearing titled “Innovation with Integrity: Examining the Risks and Benefits of AI Chatbots,” Jen King shares insights on data privacy concerns connected with the use of chatbots. She highlights opportunities for congressional action to protect chatbot users from related harms.



Elon Musk was forced to put restrictions on X and its AI chatbot, Grok, after its image generator sparked outrage around the world. Grok created non-consensual sexualized images, prompting some countries to ban the bot. Liz Landers discussed Grok's troubles with Riana Pfefferkorn of the Stanford Institute for Human-Centered Artificial Intelligence.

This brief proposes a practical validation framework to help policymakers separate legitimate claims about AI systems from unsupported claims.

A new study shows the AI industry is withholding key information about its models.