The AI Regulatory Alignment Problem

This brief sheds light on the “regulatory alignment” problem by considering the technical and institutional feasibility of four commonly proposed AI regulatory regimes.
Key Takeaways
Although the demand for AI regulation is at a near fever pitch and may reflect a variety of legitimate concerns, four common proposals to regulate AI—mandatory disclosure, registration, licensing, and auditing regimes—are no panacea for all that ails AI. Before rushing into regulation, policymakers should consider feasibility, trade-offs, and unintended consequences.
Many proposals suffer from what we call the “regulatory alignment problem,” where a regulatory regime’s objective or impact either fails to remediate the AI-related risk at issue (i.e., regulatory mismatch) or conflicts with other societal values and regulatory goals (i.e., value conflict).
Establishing an AI super-regulator risks creating redundant, ambiguous, or conflicting jurisdiction given the breadth of AI applications and the number of agencies with existing AI-related regulatory authorities.
Adverse event reporting and third-party audits with government oversight can address key impediments to effective regulation by enabling the government to learn about the risks of AI models and verify industry claims without drastically increasing its capacity.
Policymakers should not expect uniform implementation of regulatory principles absent clear guidance, given that operationalizing high-level definitions (e.g., “dangerous capabilities”) and AI principles (e.g., “fairness”) is not self-evident, value-neutral, or, in some cases, even technically feasible.
Introduction
While the AI alignment problem—the notion that machine and human values may not be aligned—has arisen as an impetus for regulation, what is less recognized is that hurried calls to regulate create their own regulatory alignment problem, where proposals may distract, fail, or backfire.
In recent Senate testimony, OpenAI chief executive Sam Altman urged Congress to regulate AI, calling for AI safety standards, independent audits, and a new agency to issue licenses for developing advanced AI systems. His testimony echoed calls from various academics and AI researchers, who have long proposed “urgent priorities” for AI governance, including licensing procedures. Legislators have also expressed support for similar proposals. During the Altman hearing, Senator Lindsey Graham voiced support for “an agency that issues a license and can take it away.” He joined Senator Elizabeth Warren in proposing an independent regulatory commission with licensing powers over dominant tech platforms, including those that develop AI. Even more recently, Senators Richard Blumenthal and Josh Hawley proposed a regulatory framework featuring an independent oversight body, licensing and registration requirements for advanced or high-risk AI models, audits, and public disclosures.
But none of these proposals is straightforward to implement. For instance, licensing regimes, at best, may be technically or institutionally infeasible—requiring a dedicated agency, as well as clear eligibility criteria or standards for pre-market evaluations—all of which would take months, if not years, to establish. At worst, a licensing scheme may undermine public safety and competition by disproportionately burdening less-resourced actors—impeding useful safety research and consolidating market power among a handful of well-resourced companies. Many of these concerns are not unique to licensing, but also apply to registration, disclosure, and auditing proposals.
In “AI Regulation Has Its Own Alignment Problem,” we consider the technical and institutional feasibility of four commonly proposed AI regulatory regimes—disclosure, registration, licensing, and auditing—described in the table, and conclude that each suffers from its own regulatory alignment problem. Some proposals may fail to address the problems they set out to solve due to technical or institutional constraints, while others may even worsen those problems or introduce entirely new harms. Proposals that purport to address all that ails AI (e.g., by mandating transparent, fair, privacy-preserving, accurate, and explainable AI) ignore the reality that these goals cannot all be jointly satisfied (for instance, stronger privacy protections often come at the cost of accuracy, and common statistical fairness criteria cannot generally be met simultaneously).
Without access to quality information about harms, risks, and performance, regulatory misalignment is almost assured. The current state of affairs—where only a small number of private, self-interested actors know about the risks arising from AI—creates “a dangerous dynamic” between industry experts and legislators reliant on industry expertise. The question is which policies are best positioned to address the underlying problem. Rather than rushing to poorly calibrated or infeasible regulation, policymakers should first seek to enhance the government’s understanding of the risks and reliability of AI systems.