
HAI Policy Briefs

November 2023

The AI Regulatory Alignment Problem

While the AI alignment problem—the notion that machine and human values may not be aligned—has arisen as an impetus for regulation, what is less recognized is that hurried calls to regulate create their own regulatory alignment problem, where proposals may distract, fail, or backfire. In this brief, we shed light on this “regulatory misalignment” problem by considering the technical and institutional feasibility of four commonly proposed AI regulatory regimes. Some proposals may fail to address the problems they set out to solve due to technical or institutional constraints, while others may even worsen those problems or introduce entirely new harms.

Key Takeaways


➜ Although the demand for AI regulation is at a near fever pitch and may reflect a variety of legitimate concerns, four common proposals to regulate AI—mandatory disclosure, registration, licensing, and auditing regimes—are not the magic remedy to cure all that ails AI. Before rushing into regulation, policymakers should consider feasibility, trade-offs, and unintended consequences.

➜ Many proposals suffer from what we call the “regulatory alignment problem,” where a regulatory regime’s objective or impact either fails to remediate the AI-related risk at issue (i.e., regulatory mismatch) or conflicts with other societal values and regulatory goals (i.e., value conflict).

➜ Establishing an AI super-regulator risks creating redundant, ambiguous, or conflicting jurisdiction given the breadth of AI applications and the number of agencies with existing AI-related regulatory authorities.

➜ Adverse event reporting and third-party audits with government oversight can address key impediments to effective regulation by enabling the government to learn about the risks of AI models and verify industry claims without drastically increasing its capacity.

➜ Policymakers should not expect uniform implementation of regulatory principles absent clear guidance, given that operationalizing high-level definitions (e.g., “dangerous capabilities”) and AI principles (e.g., “fairness”) is not self-evident, value-neutral, or, in some cases, even technically feasible.
