As AI use grows, how can we safeguard privacy, security, and data protection for individuals and organizations?

A Stanford HAI workshop brought together experts to develop new evaluation methods that assess AI's hidden capabilities, not just its test-taking performance.

A key promise of machine learning is the ability to assist users with personal tasks.

In this testimony, presented at the U.S. House Committee on Energy and Commerce’s Subcommittee on Oversight and Investigations hearing titled “Innovation with Integrity: Examining the Risks and Benefits of AI Chatbots,” Jen King shares insights on data privacy concerns connected with the use of chatbots. She highlights opportunities for congressional action to protect chatbot users from related harms.

Elon Musk was forced to put restrictions on X and its AI chatbot, Grok, after its image generator sparked outrage around the world. Grok created non-consensual sexualized images, prompting some countries to ban the bot. Liz Landers discussed Grok's troubles with Riana Pfefferkorn of the Stanford Institute for Human-Centered Artificial Intelligence.

This brief proposes a practical validation framework to help policymakers separate legitimate claims about AI systems from unsupported claims.

A new study shows the AI industry is withholding key information.

This brief explores student misuse of AI-powered “nudify” apps to create child sexual abuse material and highlights gaps in school response and policy.

Jennifer King, a Policy Fellow at Stanford HAI who specializes in privacy, discusses the vagueness of the TSA’s public communications about what it is doing with facial recognition data.

This brief assesses the benefits of adverse event reporting systems for AI, which document failures and harms after deployment, and provides policy recommendations for implementing them.

A Stanford study reveals that leading AI companies are pulling user conversations for training, highlighting privacy risks and a need for clearer policies.
