Privacy, Safety, Security | Stanford HAI



All Work Published on Privacy, Safety, Security

Safeguarding Third-Party AI Research
Kevin Klyman, Shayne Longpre, Sayash Kapoor, Rishi Bommasani, Percy Liang, Peter Henderson
Policy Brief · Quick Read · Feb 13, 2025

This brief examines the barriers to independent AI evaluation and proposes safe harbors to protect good-faith third-party research.

Topics: Privacy, Safety, Security; Regulation, Policy, Governance
How Do We Protect Children in the Age of AI?
Nikki Goth Itoi
News · Sep 08, 2025

Tools that enable teens to create deepfake nude images of each other are compromising child safety, and parents must get involved.

Topics: Ethics, Equity, Inclusion; Privacy, Safety, Security
What Makes a Good AI Benchmark?
Anka Reuel, Amelia Hardy, Chandler Smith, Max Lamparth, Malcolm Hardy, Mykel Kochenderfer
Policy Brief · Quick Read · Dec 11, 2024

This brief presents a novel assessment framework for evaluating the quality of AI benchmarks and scores 24 benchmarks against the framework.

Topics: Foundation Models; Privacy, Safety, Security
The Age-Checked Internet Has Arrived
Wired
Media Mention · Jul 25, 2025

Stanford HAI Policy Fellow Riana Pfefferkorn speaks about the implications of laws related to age-checked access to the internet.

Topics: Privacy, Safety, Security
Response to U.S. AI Safety Institute’s Request for Comment on Managing Misuse Risk For Dual-Use Foundation Models
Rishi Bommasani, Alexander Wan, Yifan Mai, Percy Liang, Daniel E. Ho
Response to Request · Sep 09, 2024

Stanford scholars respond to a federal RFC on the U.S. AI Safety Institute’s draft guidelines for managing the misuse risk for dual-use foundation models.

Topics: Regulation, Policy, Governance; Foundation Models; Privacy, Safety, Security
Europe's Innovation Pivot: Can the EU Lead the Next Wave of AI?
Daniel Zhang
News · Jun 04, 2025

With its AI Continent Action Plan, the EU aims to reinvent its innovation model. European Commission Executive Vice-President for Tech Sovereignty, Security and Democracy Henna Virkkunen outlines its ambition.

Topics: Government, Public Administration; Privacy, Safety, Security