Regulation, Policy, Governance | Stanford HAI

All Work Published on Regulation, Policy, Governance

Response to the Department of Education’s Request for Information on AI in Education
Victor R. Lee, Vanessa Parli, Isabelle Hau, Patrick Hynes, Daniel Zhang
Quick Read | Aug 20, 2025
Response to Request

Stanford scholars respond to a federal RFI on advancing AI in education, urging policymakers to anchor their approach in proven research.

Education, Skills
Regulation, Policy, Governance
Response to Request
23andMe Clients Navigate Uncertain Future Two Years After Breach
Bloomberg Law
Oct 17, 2025
Media Mention

"The biggest difference between 23andMe and other breaches is that sequenced DNA is 'irreplaceable and immutable,' said Jennifer King," a Stanford HAI Policy Fellow.

Law Enforcement and Justice
Regulation, Policy, Governance
Media Mention
Conditional Generative Models for Synthetic Tabular Data: Applications for Precision Medicine and Diverse Representations
Kara Liu, Russ Altman
Deep Dive | Jan 14, 2025
Research

Tabular medical datasets, like electronic health records (EHRs), biobanks, and structured clinical trial data, are rich sources of information with the potential to advance precision medicine and optimize patient care. However, real-world medical datasets have limited patient diversity and cannot simulate hypothetical outcomes, both of which are necessary for equitable and effective medical research. Fueled by recent advancements in machine learning, generative models offer a promising solution to these data limitations by generating enhanced synthetic data. This review highlights the potential of conditional generative models (CGMs) to create patient-specific synthetic data for a variety of precision medicine applications. We survey CGM approaches that tackle two medical applications: correcting for data representation biases and simulating digital health twins. We additionally explore how the surveyed methods handle modeling tabular medical data and briefly discuss evaluation criteria. Finally, we summarize the technical, medical, and ethical challenges that must be addressed before CGMs can be effectively and safely deployed in the medical field.

Healthcare
Regulation, Policy, Governance
Research
Labeling AI-Generated Content May Not Change Its Persuasiveness
Isabel Gallegos, Dr. Chen Shani, Weiyan Shi, Federico Bianchi, Izzy Benjamin Gainsburg, Dan Jurafsky, Robb Willer
Quick Read | Jul 30, 2025
Policy Brief

This brief evaluates the impact of authorship labels on the persuasiveness of AI-written policy messages.

Generative AI
Regulation, Policy, Governance
Policy Brief
Be Careful What You Tell Your AI Chatbot
Nikki Goth Itoi
Oct 15, 2025
News

A Stanford study reveals that leading AI companies are pulling user conversations for training, highlighting privacy risks and a need for clearer policies.

Privacy, Safety, Security
Generative AI
Regulation, Policy, Governance
News
Adverse Event Reporting for AI: Developing the Information Infrastructure Government Needs to Learn and Act
Lindsey A. Gailmard, Drew Spence, Daniel E. Ho
Quick Read | Jun 30, 2025
Issue Brief

This brief assesses the benefits of adverse event reporting systems for AI, which capture failures and harms post-deployment, and provides policy recommendations for building them.

Regulation, Policy, Governance
Privacy, Safety, Security
Issue Brief