Policy | Issue Brief

Adverse Event Reporting for AI: Developing the Information Infrastructure Government Needs to Learn and Act

Date: June 30, 2025
Topics: Regulation, Policy, Governance; Privacy, Safety, Security
Abstract

This brief assesses the benefits of adverse event reporting systems for AI, which capture failures and harms post-deployment, and provides policy recommendations for building them.


Key Takeaways

  • Adverse event reporting systems enable policymakers, industry, and downstream users to learn about AI risks from real-world use.

  • These systems don’t necessarily require massive new spending or agencies—they can be developed iteratively, scaled over time, and supported through strategic partnerships.

  • Reporting allows both regulators and industry to respond proactively by surfacing problems quickly, which promotes a culture of safety.

  • Reporting improves policymaking by providing policymakers with evidence to fill regulatory gaps only where they actually exist.

Why Pre-Deployment Testing Alone Cannot Identify All AI Risks

For policymakers trying to proactively address AI risks, one of the most persistent—and underappreciated—problems is that the most serious risks of advanced AI systems often don’t emerge until after deployment. While much recent attention has focused on pre-deployment risk assessments—testing, evaluation, and red-teaming—these efforts cannot fully anticipate how models will behave in real-world use. Systems like GPT-4, Claude, and DeepSeek continue to surprise even their developers with unexpected capabilities and behaviors post-release. And the uncertainty surrounding model capabilities and risks only grows as general-purpose models are deployed in complex environments.

Pre-deployment testing is important, but it is not enough. If policymakers want to ensure that AI development serves the public interest, they need mechanisms that allow government, industry, and society to learn about the technology as it evolves—and to respond when things go wrong.

This is one lesson from earlier governance failures of new and emerging digital technology. Social media platforms were not required to monitor or report harms systematically, and, as a result, policymakers were largely blind to emerging risks until crises like mental health harms provoked reactive responses. AI models may very well follow the same path unless we build capacity to capture and responsibly react to new information.

Right now, most of the information about how these systems perform post-deployment is held by private companies, out of reach of policymakers and the public. Closing that gap requires more than asking companies to “do better” with voluntary commitments—it requires building public infrastructure for learning. One central tool for this is adverse event reporting.

Adverse event reporting systems are already used to surface harms, detrimental events, errors, or malfunctions in other domains. Applied to AI, these systems would provide a structured way to collect reports of model failures, misuse, or unexpected behavior from developers and downstream users. By enabling iterative, evidence-based policymaking, adverse event reporting can help regulators move from guessing about potential risks to understanding what is happening.
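
To make "a structured way to collect reports" concrete, the sketch below shows one hypothetical shape such a report record could take, loosely modeled on adverse event forms in other domains. It is illustrative only: the field names, reporter categories, and Python representation are assumptions for exposition, not a schema proposed by the brief.

```python
# A minimal, purely illustrative sketch of one structured adverse event
# report for an AI system. Every field name is an assumption for
# exposition; the brief does not propose a specific schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class ReporterType(Enum):
    """Who filed the report: developers and downstream users alike."""
    DEVELOPER = "developer"
    DEPLOYER = "deployer"
    END_USER = "end_user"
    RESEARCHER = "researcher"


@dataclass
class AdverseEventReport:
    """One post-deployment report of failure, misuse, or unexpected behavior."""
    reported_at: datetime
    reporter_type: ReporterType
    system_name: str                  # model or product involved
    system_version: str               # version or deployment identifier
    deployment_context: str           # domain of use, e.g. "healthcare"
    event_description: str            # free-text account of what happened
    harm_categories: list[str] = field(default_factory=list)  # e.g. ["privacy"]
    severity: str = "unknown"         # triage label assigned on intake


# Example: a downstream user reporting unexpected model behavior.
report = AdverseEventReport(
    reported_at=datetime.now(timezone.utc),
    reporter_type=ReporterType.END_USER,
    system_name="example-model",
    system_version="2025-06",
    deployment_context="customer support",
    event_description="Model disclosed another user's account details.",
    harm_categories=["privacy"],
)
```

Even a small shared schema along these lines would let an intake body aggregate reports across developers and deployment contexts and spot recurring failure modes.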

If policymakers are serious about regulating AI in a way that is effective, adaptive, scalable, and sustainable, building an adverse event reporting system should be a top priority. Without it, government risks flying blind.

Authors
  • Lindsey A. Gailmard
  • Drew Spence
  • Daniel E. Ho

Related Publications

Response to OSTP's Request for Information on Accelerating the American Scientific Enterprise
Rishi Bommasani, John Etchemendy, Surya Ganguli, Daniel E. Ho, Guido Imbens, James Landay, Fei-Fei Li, Russell Wald
Response to Request | Quick Read | Dec 26, 2025
Topics: Sciences (Social, Health, Biological, Physical); Regulation, Policy, Governance

Stanford scholars respond to a federal RFI on scientific discovery, calling for the government to support a new “team science” academic research model for AI-enabled discovery.

Response to FDA's Request for Comment on AI-Enabled Medical Devices
Desmond C. Ong, Jared Moore, Nicole Martinez-Martin, Caroline Meinhardt, Eric Lin, William Agnew
Response to Request | Quick Read | Dec 02, 2025
Topics: Healthcare; Regulation, Policy, Governance

Stanford scholars respond to a federal RFC on evaluating AI-enabled medical devices, recommending policy interventions to help mitigate the harms of AI-powered chatbots used as therapists.

Jen King's Testimony Before the U.S. House Committee on Energy and Commerce Oversight and Investigations Subcommittee
Jennifer King
Testimony | Quick Read | Nov 18, 2025
Topics: Privacy, Safety, Security

In this testimony presented to the U.S. House Committee on Energy and Commerce’s Subcommittee on Oversight and Investigations hearing titled “Innovation with Integrity: Examining the Risks and Benefits of AI Chatbots,” Jen King shares insights on data privacy concerns connected with the use of chatbots. She highlights opportunities for congressional action to protect chatbot users from related harms.

Russ Altman’s Testimony Before the U.S. Senate Committee on Health, Education, Labor, and Pensions
Russ Altman
Testimony | Quick Read | Oct 09, 2025
Topics: Healthcare; Regulation, Policy, Governance; Sciences (Social, Health, Biological, Physical)

In this testimony presented to the U.S. Senate Committee on Health, Education, Labor, and Pensions hearing titled “AI’s Potential to Support Patients, Workers, Children, and Families,” Russ Altman highlights opportunities for congressional support to make AI applications for patient care and drug discovery stronger, safer, and human-centered.