Privacy, Safety, Security | Stanford HAI


Privacy, Safety, Security

As AI use grows, how can we safeguard privacy, security, and data protection for individuals and organizations?

The Evolution of Safety: Stanford’s Mykel Kochenderfer Explores Responsible AI in High-Stakes Environments
Scott Hadly
May 09, 2025
News

As AI technologies rapidly evolve, Professor Kochenderfer leads the charge in developing effective validation mechanisms to ensure safety in autonomous systems like vehicles and drones.


Can Foundation Models Help Us Achieve Perfect Secrecy?
Simran Arora, Christopher Ré
Apr 01, 2022
Research

A key promise of machine learning is the ability to assist users with personal tasks.


Safeguarding Third-Party AI Research
Kevin Klyman, Shayne Longpre, Sayash Kapoor, Rishi Bommasani, Percy Liang, Peter Henderson
Feb 13, 2025
Policy Brief

This brief examines the barriers to independent AI evaluation and proposes safe harbors to protect good-faith third-party research.


Julian Nyarko
Person
A Framework to Report AI’s Flaws
Andrew Myers
Apr 28, 2025
News

Pointing to "white-hat" hacking, AI policy experts recommend a new system of third-party reporting and tracking of AI’s flaws.


What Makes a Good AI Benchmark?
Anka Reuel, Amelia Hardy, Chandler Smith, Max Lamparth, Malcolm Hardy, Mykel Kochenderfer
Dec 11, 2024
Policy Brief

This brief presents a novel assessment framework for evaluating the quality of AI benchmarks and scores 24 benchmarks against the framework.


All Work Published on Privacy, Safety, Security

23andMe’s DNA Database Is Up For Sale. Who Might Want It, And What For?
Washington Post
Mar 25, 2025
Media Mention

After 23andMe announced that it is headed to bankruptcy court, it is unclear what will happen to the mass of sensitive genetic data it holds. Jen King, Policy Fellow at HAI, comments on where this data could end up and how it might be used.

Response to U.S. AI Safety Institute’s Request for Comment on Managing Misuse Risk For Dual-Use Foundation Models
Rishi Bommasani, Alexander Wan, Yifan Mai, Percy Liang, Daniel E. Ho
Sep 09, 2024
Response to Request

In this response to the U.S. AI Safety Institute’s (US AISI) request for comment on its draft guidelines for managing the misuse risk for dual-use foundation models, scholars from Stanford HAI, the Center for Research on Foundation Models (CRFM), and the Regulation, Evaluation, and Governance Lab (RegLab) urge the US AISI to strengthen its guidance on reproducible evaluations and third-party evaluations, as well as clarify guidance on post-deployment monitoring. They also encourage the institute to develop similar guidance for other actors in the foundation model supply chain and for non-misuse risks, while ensuring the continued open release of foundation models absent evidence of marginal risk.

Amy Zegart
Morris Arnold and Nona Jean Cox Senior Fellow at the Hoover Institution | Senior Fellow, Freeman Spogli Institute for International Studies | Associate Director and Senior Fellow, Stanford HAI | Professor, by courtesy, of Political Science, Stanford
Person
Signal Isn’t Infallible, Despite Being One Of The Most Secure Encrypted Chat Apps
NBC News
Mar 25, 2025
Media Mention

HAI Policy Fellow Riana Pfefferkorn explains the different types of risk protection the private messaging app Signal can and cannot offer its users.

Response to NTIA’s Request for Comment on Dual Use Open Foundation Models
Researchers from Stanford HAI, CRFM, RegLab, Other Institutions
Mar 27, 2024
Response to Request

In this response to the National Telecommunications and Information Administration’s (NTIA) request for comment on dual-use foundation AI models with widely available model weights, scholars from Stanford HAI, the Center for Research on Foundation Models (CRFM), the Regulation, Evaluation, and Governance Lab (RegLab), and other institutions urge policymakers to amplify the benefits of open foundation models while further assessing the extent of their marginal risks.

AI Action Summit in Paris Highlights A Shifting Policy Landscape
Shana Lynch
Feb 27, 2025
News

Stanford HAI joined global leaders to discuss the balance between AI innovation and safety and explore future policy paths.
