Privacy, Safety, Security | Stanford HAI


Privacy, Safety, Security

As AI use grows, how can we safeguard privacy, security, and data protection for individuals and organizations?

Smart Enough to Do Math, Dumb Enough to Fail: The Hunt for a Better AI Test
Andrew Myers
Feb 02, 2026
News

A Stanford HAI workshop brought together experts to develop new evaluation methods that assess AI's hidden capabilities, not just its test-taking performance.

Foundation Models
Generative AI
Privacy, Safety, Security

Can Foundation Models Help Us Achieve Perfect Secrecy?
Simran Arora
Apr 01, 2022
Research

A key promise of machine learning is the ability to assist users with personal tasks.

Privacy, Safety, Security
Foundation Models

Jen King's Testimony Before the U.S. House Committee on Energy and Commerce Oversight and Investigations Subcommittee
Jennifer King
Quick Read | Nov 18, 2025
Testimony

In this testimony presented to the U.S. House Committee on Energy and Commerce’s Subcommittee on Oversight and Investigations hearing titled “Innovation with Integrity: Examining the Risks and Benefits of AI Chatbots,” Jen King shares insights on data privacy concerns connected with the use of chatbots. She highlights opportunities for congressional action to protect chatbot users from related harms.

Privacy, Safety, Security

Amy Zegart
Person
Oct 05

Privacy, Safety, Security
Musk's Grok AI Faces More Scrutiny After Generating Sexual Deepfake Images
PBS NewsHour
Jan 16, 2026
Media Mention

Elon Musk was forced to put restrictions on X and its AI chatbot, Grok, after its image generator sparked outrage around the world. Grok created non-consensual sexualized images, prompting some countries to ban the bot. Liz Landers discussed Grok's troubles with Riana Pfefferkorn of the Stanford Institute for Human-Centered Artificial Intelligence.

Privacy, Safety, Security
Regulation, Policy, Governance
Ethics, Equity, Inclusion

Validating Claims About AI: A Policymaker’s Guide
Olawale Salaudeen, Anka Reuel, Angelina Wang, Sanmi Koyejo
Quick Read | Sep 24, 2025
Policy Brief

This brief proposes a practical validation framework to help policymakers separate legitimate claims about AI systems from unsupported claims.

Foundation Models
Privacy, Safety, Security

All Work Published on Privacy, Safety, Security

Transparency in AI is on the Decline
Rishi Bommasani, Kevin Klyman, Alexander Wan, Percy Liang
Dec 09, 2025
News

A new study shows the AI industry is withholding key information.

Foundation Models
Regulation, Policy, Governance
Privacy, Safety, Security
Addressing AI-Generated Child Sexual Abuse Material: Opportunities for Educational Policy
Riana Pfefferkorn
Quick Read | Jul 21, 2025
Policy Brief

This brief explores student misuse of AI-powered “nudify” apps to create child sexual abuse material and highlights gaps in school response and policy.

Privacy, Safety, Security
Education, Skills
Julian Nyarko
Professor, Stanford Law | Associate Director and Senior Fellow, Stanford HAI | Center Fellow, Stanford Institute for Economic Policy Research
Person

Privacy, Safety, Security
Regulation, Policy, Governance
Why You Can (And Should) Opt Out Of TSA Facial Recognition Right Now
HuffPost
Nov 06, 2025
Media Mention

Jennifer King, a Policy Fellow at Stanford HAI who specializes in privacy, discusses the vagueness of the TSA’s public communications about what it does with facial recognition data.

Law Enforcement and Justice
Privacy, Safety, Security
Adverse Event Reporting for AI: Developing the Information Infrastructure Government Needs to Learn and Act
Lindsey A. Gailmard, Drew Spence, Daniel E. Ho
Quick Read | Jun 30, 2025
Issue Brief

This brief assesses the benefits of adverse event reporting systems for AI, which document failures and harms post-deployment, and offers policy recommendations for building them.

Regulation, Policy, Governance
Privacy, Safety, Security
Be Careful What You Tell Your AI Chatbot
Nikki Goth Itoi
Oct 15, 2025
News

A Stanford study reveals that leading AI companies are pulling user conversations for training, highlighting privacy risks and a need for clearer policies.

Privacy, Safety, Security
Generative AI
Regulation, Policy, Governance