The AI Regulatory Alignment Problem | Stanford HAI

Policy Brief

The AI Regulatory Alignment Problem

Date
November 15, 2023
Topics
Regulation, Policy, Governance
Read Paper
Abstract

This brief sheds light on the “regulatory misalignment” problem by considering the technical and institutional feasibility of four commonly proposed AI regulatory regimes.

Key Takeaways

  • Although the demand for AI regulation is at a near fever pitch and may reflect a variety of legitimate concerns, four common proposals to regulate AI—mandatory disclosure, registration, licensing, and auditing regimes—are not the magic remedy to cure all that ails AI. Before rushing into regulation, policymakers should consider feasibility, trade-offs, and unintended consequences.

  • Many proposals suffer from what we call the “regulatory alignment problem,” where a regulatory regime’s objective or impact either fails to remediate the AI-related risk at issue (i.e., regulatory mismatch) or conflicts with other societal values and regulatory goals (i.e., value conflict).

  • Establishing an AI super-regulator risks creating redundant, ambiguous, or conflicting jurisdiction given the breadth of AI applications and the number of agencies with existing AI-related regulatory authorities.

  • Adverse event reporting and third-party audits with government oversight can address key impediments to effective regulation by enabling the government to learn about the risks of AI models and verify industry claims without drastically increasing its capacity.

  • Policymakers should not expect uniform implementation of regulatory principles absent clear guidance, given that operationalizing high-level definitions (e.g., “dangerous capabilities”) and AI principles (e.g., “fairness”) is not self-evident, value-neutral, or even technically feasible in some cases.

Introduction

While the AI alignment problem—the notion that machine and human values may not be aligned—has arisen as an impetus for regulation, what is less recognized is that hurried calls to regulate create their own regulatory alignment problem, where proposals may distract, fail, or backfire.

In recent Senate testimony, OpenAI chief executive Sam Altman urged Congress to regulate AI, calling for AI safety standards, independent audits, and a new agency to issue licenses for developing advanced AI systems. His testimony echoed calls from various academics and AI researchers, who have long proposed “urgent priorities” for AI governance, including licensing procedures. Legislators have also expressed support for similar proposals. During the Altman hearing, Senator Lindsey Graham voiced support for “an agency that issues a license and can take it away.” He joined Senator Elizabeth Warren in proposing an independent regulatory commission with licensing powers over dominant tech platforms, including those that develop AI. Even more recently, Senators Richard Blumenthal and Josh Hawley proposed a regulatory framework featuring an independent oversight body, licensing and registration requirements for advanced or high-risk AI models, audits, and public disclosures.

But none of these proposals is straightforward to implement. For instance, licensing regimes, at best, may be technically or institutionally infeasible—requiring a dedicated agency, as well as clear eligibility criteria or standards for pre-market evaluations—all of which would take months, if not years, to establish. At worst, a licensing scheme may undermine public safety and corporate competition by disproportionately burdening less-resourced actors—impeding useful safety research and consolidating market power among a handful of well-resourced companies. Many of these concerns are not unique to licensing, but also apply to registration, disclosure, and auditing proposals.

In “AI Regulation Has Its Own Alignment Problem,” we consider the technical and institutional feasibility of four commonly proposed AI regulatory regimes—disclosure, registration, licensing, and auditing—described in the table, and conclude that each suffers from its own regulatory alignment problem. Some proposals may fail to address the problems they set out to solve due to technical or institutional constraints, while others may even worsen those problems or introduce entirely new harms. Proposals that purport to address all that ails AI (e.g., by mandating transparent, fair, privacy-preserving, accurate, and explainable AI) ignore the reality that many goals cannot be jointly satisfied.

Without access to quality information about harms, risks, and performance, regulatory misalignment is almost assured. The current state of affairs—where only a small number of private, self-interested actors know about risks arising from AI—creates “a dangerous dynamic” between industry experts and legislators reliant on industry expertise. The question is, what policies are best situated to address the underlying problem? Rather than rushing to poorly calibrated or infeasible regulation, policymakers should first seek to enhance the government’s understanding of the risks and reliability of AI systems.

Authors
  • Neel Guha
  • Christie M. Lawrence
  • Lindsey A. Gailmard
  • Kit T. Rodolfa
  • Faiz Surani
  • Rishi Bommasani
  • Inioluwa Deborah Raji
  • Mariano-Florentino Cuéllar
  • Colleen Honigsberg
  • Percy Liang
  • Daniel E. Ho

Related Publications

Response to OSTP's Request for Information on Accelerating the American Scientific Enterprise
Rishi Bommasani, John Etchemendy, Surya Ganguli, Daniel E. Ho, Guido Imbens, James Landay, Fei-Fei Li, Russell Wald
Quick Read · Dec 26, 2025
Response to Request

Stanford scholars respond to a federal RFI on scientific discovery, calling for the government to support a new “team science” academic research model for AI-enabled discovery.

Response to FDA's Request for Comment on AI-Enabled Medical Devices
Desmond C. Ong, Jared Moore, Nicole Martinez-Martin, Caroline Meinhardt, Eric Lin, William Agnew
Quick Read · Dec 02, 2025
Response to Request

Stanford scholars respond to a federal RFC on evaluating AI-enabled medical devices, recommending policy interventions to help mitigate the harms of AI-powered chatbots used as therapists.

Russ Altman’s Testimony Before the U.S. Senate Committee on Health, Education, Labor, and Pensions
Russ Altman
Quick Read · Oct 09, 2025
Testimony

In this testimony presented to the U.S. Senate Committee on Health, Education, Labor, and Pensions hearing titled “AI’s Potential to Support Patients, Workers, Children, and Families,” Russ Altman highlights opportunities for congressional support to make AI applications for patient care and drug discovery stronger, safer, and human-centered.

Michelle M. Mello's Testimony Before the U.S. House Committee on Energy and Commerce Health Subcommittee
Michelle Mello
Quick Read · Sep 02, 2025
Testimony

In this testimony presented to the U.S. House Committee on Energy and Commerce’s Subcommittee on Health hearing titled “Examining Opportunities to Advance American Health Care through the Use of Artificial Intelligence Technologies,” Michelle M. Mello calls for policy changes that will promote effective integration of AI tools into healthcare by strengthening trust.
