As AI use grows, how can we safeguard privacy, security, and data protection for individuals and organizations?
A new study shows the AI industry is withholding key information.
A key promise of machine learning is the ability to assist users with personal tasks.

In this testimony presented to the U.S. House Committee on Energy and Commerce’s Subcommittee on Oversight and Investigations hearing titled “Innovation with Integrity: Examining the Risks and Benefits of AI Chatbots,” Jen King shares insights on data privacy concerns connected with the use of chatbots. She highlights opportunities for congressional action to protect chatbot users from related harms.

Jennifer King, a Policy Fellow at Stanford HAI who specializes in privacy, discusses the vagueness of the TSA’s public communications about what it does with facial recognition data.

This brief proposes a practical validation framework to help policymakers separate legitimate claims about AI systems from unsupported ones.


A Stanford study reveals that leading AI companies are pulling user conversations for training, highlighting privacy risks and a need for clearer policies.

This brief explores student misuse of AI-powered “nudify” apps to create child sexual abuse material and highlights gaps in school response and policy.

HAI Policy Fellow Riana Pfefferkorn advises on ways the United States Congress could move the needle on model safety with respect to AI-generated CSAM.

This brief assesses the benefits of adverse event reporting systems for AI, which document failures and harms post-deployment, and offers policy recommendations for implementing them.

HAI Policy Fellow Riana Pfefferkorn discusses scenarios in which third parties might be able to access personal messaging data and how to keep these forms of digital communication private.