
Regulation, Policy, Governance | Stanford HAI


How policymakers can best regulate AI to balance innovation with public interests and human rights.

AI Policy Working Group

Transparency in AI is on the Decline
Rishi Bommasani, Kevin Klyman, Alexander Wan, Percy Liang
News | Dec 09, 2025
Topics: Foundation Models; Regulation, Policy, Governance; Privacy, Safety, Security

A new study shows the AI industry is withholding key information.

AI, Health, and Health Care Today and Tomorrow: The JAMA Summit Report on Artificial Intelligence
Tina Hernandez-Boussard, Michelle Mello, Nigam Shah, co-authored by 50+ experts
Research | Deep Dive | Oct 13, 2025
Topics: Healthcare; Regulation, Policy, Governance
Response to FDA's Request for Comment on AI-Enabled Medical Devices
Desmond C. Ong, Jared Moore, Nicole Martinez-Martin, Caroline Meinhardt, Eric Lin, William Agnew
Response to Request | Quick Read | Dec 02, 2025
Topics: Healthcare; Regulation, Policy, Governance

Stanford scholars respond to a federal RFC on evaluating AI-enabled medical devices, recommending policy interventions to help mitigate the harms of AI-powered chatbots used as therapists.

Daniel E. Ho
Person | Topics: Democracy; Government, Public Administration; Law Enforcement and Justice; Regulation, Policy, Governance
Our Racist, Terrifying Deepfake Future Is Here
Nature
Media Mention | Nov 03, 2025
Topics: Generative AI; Regulation, Policy, Governance; Law Enforcement and Justice

“It connects back to my fear that the people with the fewest resources will be most affected by the downsides of AI,” says HAI Policy Fellow Riana Pfefferkorn in response to a viral AI-generated deepfake video.

All Work Published on Regulation, Policy, Governance

23andMe Clients Navigate Uncertain Future Two Years After Breach
Bloomberg Law
Media Mention | Oct 17, 2025
Topics: Law Enforcement and Justice; Regulation, Policy, Governance

"The biggest difference between 23andMe and other breaches is that sequenced DNA is 'irreplaceable and immutable,'" said Jennifer King, a Stanford HAI Policy Fellow.
Automated real-time assessment of intracranial hemorrhage detection AI using an ensembled monitoring model (EMM)
Zhongnan Fang, Andrew Johnston, Lina Cheuy, Hye Sun Na, Magdalini Paschali, Camila Gonzalez, Bonnie Armstrong, Arogya Koirala, Derrick Laurel, Andrew Walker Campion, Michael Iv, Akshay Chaudhari, David B. Larson
Research | Deep Dive | Oct 13, 2025
Topics: Healthcare; Regulation, Policy, Governance

Artificial intelligence (AI) tools for radiology are commonly unmonitored once deployed. The lack of real-time case-by-case assessments of AI prediction confidence requires users to independently distinguish between trustworthy and unreliable AI predictions, which increases cognitive burden, reduces productivity, and potentially leads to misdiagnoses. To address these challenges, we introduce the Ensembled Monitoring Model (EMM), a framework inspired by clinical consensus practices using multiple expert reviews. Designed specifically for black-box commercial AI products, EMM operates independently without requiring access to internal AI components or intermediate outputs, while still providing robust confidence measurements. Using intracranial hemorrhage detection as our test case on a large, diverse dataset of 2,919 studies, we demonstrate that EMM can successfully categorize confidence in the AI-generated prediction, suggest appropriate actions, and help physicians recognize low-confidence scenarios, ultimately reducing cognitive burden. Importantly, we provide key technical considerations and best practices for successfully translating EMM into clinical settings.
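The consensus idea behind EMM can be illustrated with a small sketch: several independent monitor models each score a case, and their agreement with the black-box AI's output determines a confidence category and a suggested action. The function name, the voting scheme, and the thresholds below are illustrative assumptions, not the paper's actual implementation.

```python
# Ensemble-style confidence monitor, loosely inspired by the EMM idea above.
# All thresholds and category labels here are illustrative assumptions.

def monitor_confidence(ai_positive, monitor_probs, threshold=0.5):
    """Categorize confidence in a black-box AI's binary prediction.

    ai_positive: the commercial AI's binary finding (e.g., hemorrhage detected).
    monitor_probs: per-case probabilities from independent monitor models.
    Returns a (confidence category, suggested action) pair.
    """
    # Each monitor casts a binary vote; agreement is the fraction of
    # monitors that side with the black-box AI's prediction.
    votes = [p >= threshold for p in monitor_probs]
    agreement = sum(v == ai_positive for v in votes) / len(votes)
    if agreement >= 0.8:
        return "high-confidence", "report as-is"
    if agreement >= 0.5:
        return "indeterminate", "flag for routine review"
    return "low-confidence", "prioritize physician review"
```

When the monitors unanimously contradict the AI, the case is routed to a physician first, which is the scenario the abstract describes as reducing cognitive burden: readers only scrutinize cases the ensemble flags.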
Russ Altman’s Testimony Before the U.S. Senate Committee on Health, Education, Labor, and Pensions
Russ Altman
Testimony | Quick Read | Oct 09, 2025
Topics: Healthcare; Regulation, Policy, Governance; Sciences (Social, Health, Biological, Physical)

In this testimony presented to the U.S. Senate Committee on Health, Education, Labor, and Pensions hearing titled “AI’s Potential to Support Patients, Workers, Children, and Families,” Russ Altman highlights opportunities for congressional support to make AI applications for patient care and drug discovery stronger, safer, and human-centered.
Julian Nyarko
Professor, Stanford Law | Associate Director and Senior Fellow, Stanford HAI | Center Fellow, Stanford Institute for Economic Policy Research
Person | Topics: Privacy, Safety, Security; Regulation, Policy, Governance
Be Careful What You Tell Your AI Chatbot
Nikki Goth Itoi
News | Oct 15, 2025
Topics: Privacy, Safety, Security; Generative AI; Regulation, Policy, Governance

A Stanford study reveals that leading AI companies are pulling user conversations for training, highlighting privacy risks and a need for clearer policies.
Developing mental health AI tools that improve care across different groups and contexts
Nicole Martinez-Martin
Research | Deep Dive | Oct 10, 2025
Topics: Healthcare; Regulation, Policy, Governance

To realize the potential of mental health AI applications to deliver improved care, a multipronged approach is needed, including representative AI datasets, research practices that reflect and anticipate potential sources of bias, stakeholder engagement, and equitable design practices.
Page 1 of 5