Privacy, Safety, Security | Stanford HAI
All Work Published on Privacy, Safety, Security

Safeguarding Third-Party AI Research
Kevin Klyman, Shayne Longpre, Sayash Kapoor, Rishi Bommasani, Percy Liang, Peter Henderson
Policy Brief | Quick Read | Feb 13, 2025

This brief examines the barriers to independent AI evaluation and proposes safe harbors to protect good-faith third-party research.

Topics: Privacy, Safety, Security; Regulation, Policy, Governance

Why You Can (And Should) Opt Out Of TSA Facial Recognition Right Now
HuffPost
Media Mention | Nov 06, 2025

Jennifer King, a Policy Fellow at Stanford HAI who specializes in privacy, discusses the vagueness of the TSA’s public communications about what it does with facial recognition data.

Topics: Law Enforcement and Justice; Privacy, Safety, Security

What Makes a Good AI Benchmark?
Anka Reuel, Amelia Hardy, Chandler Smith, Max Lamparth, Malcolm Hardy, Mykel Kochenderfer
Policy Brief | Quick Read | Dec 11, 2024

This brief presents a novel assessment framework for evaluating the quality of AI benchmarks and scores 24 benchmarks against the framework.

Topics: Foundation Models; Privacy, Safety, Security

Be Careful What You Tell Your AI Chatbot
Nikki Goth Itoi
News | Oct 15, 2025

A Stanford study reveals that leading AI companies are pulling user conversations for training, highlighting privacy risks and a need for clearer policies.

Topics: Privacy, Safety, Security; Generative AI; Regulation, Policy, Governance

Response to U.S. AI Safety Institute’s Request for Comment on Managing Misuse Risk For Dual-Use Foundation Models
Rishi Bommasani, Alexander Wan, Yifan Mai, Percy Liang, Daniel E. Ho
Response to Request | Sep 09, 2024

Stanford scholars respond to a federal RFC on the U.S. AI Safety Institute’s draft guidelines for managing the misuse risk for dual-use foundation models.

Topics: Regulation, Policy, Governance; Foundation Models; Privacy, Safety, Security

How Congress Could Stifle The Onslaught Of AI-Generated Child Sexual Abuse Material
Tech Policy Press
Media Mention | Sep 25, 2025

HAI Policy Fellow Riana Pfefferkorn advises on ways in which the United States Congress could move the needle on model safety regarding AI-generated CSAM.

Topics: Ethics, Equity, Inclusion; Privacy, Safety, Security; Regulation, Policy, Governance