Regulation, Policy, Governance | Stanford HAI

All Work Published on Regulation, Policy, Governance

Labeling AI-Generated Content May Not Change Its Persuasiveness
Isabel Gallegos, Dr. Chen Shani, Weiyan Shi, Federico Bianchi, Izzy Benjamin Gainsburg, Dan Jurafsky, Robb Willer
Policy Brief | Quick Read | Jul 30, 2025

This brief evaluates the impact of authorship labels on the persuasiveness of AI-written policy messages.

Topics: Generative AI; Regulation, Policy, Governance
Our Racist, Terrifying Deepfake Future Is Here
Nature
Media Mention | Nov 03, 2025

“It connects back to my fear that the people with the fewest resources will be most affected by the downsides of AI,” says HAI Policy Fellow Riana Pfefferkorn in response to a viral AI-generated deepfake video.

Topics: Generative AI; Regulation, Policy, Governance; Law Enforcement and Justice
Adverse Event Reporting for AI: Developing the Information Infrastructure Government Needs to Learn and Act
Lindsey A. Gailmard, Drew Spence, Daniel E. Ho
Issue Brief | Quick Read | Jun 30, 2025

This brief assesses the benefits of, and provides policy recommendations for, adverse event reporting systems for AI that report failures and harms post-deployment.

Topics: Regulation, Policy, Governance; Privacy, Safety, Security
23andMe Clients Navigate Uncertain Future Two Years After Breach
Bloomberg Law
Media Mention | Oct 17, 2025

“The biggest difference between 23andMe and other breaches is that sequenced DNA is ‘irreplaceable and immutable,’” said Jennifer King, a Stanford HAI Policy Fellow.

Topics: Law Enforcement and Justice; Regulation, Policy, Governance
Response to OSTP’s Request for Information on the Development of an AI Action Plan
Caroline Meinhardt, Daniel Zhang, Rishi Bommasani, Jennifer King, Russell Wald, Percy Liang, Daniel E. Ho
Response to Request | Mar 17, 2025

Stanford scholars respond to a federal RFI on the development of an AI Action Plan, urging policymakers to promote open and scientific innovation, craft evidence-based AI policy, and empower government leaders.

Topics: Regulation, Policy, Governance
Be Careful What You Tell Your AI Chatbot
Nikki Goth Itoi
News | Oct 15, 2025

A Stanford study reveals that leading AI companies are pulling user conversations for training, highlighting privacy risks and a need for clearer policies.

Topics: Privacy, Safety, Security; Generative AI; Regulation, Policy, Governance