
While the AI alignment problem—the notion that machine and human values may not be aligned—has arisen as an impetus for regulation, what is less recognized is that hurried calls to regulate create their own regulatory alignment problem, where proposals may distract, fail, or backfire. In this brief, we shed light on this “regulatory misalignment” problem by considering the technical and institutional feasibility of four commonly proposed AI regulatory regimes. Some proposals may fail to address the problems they set out to solve due to technical or institutional constraints, while others may even worsen those problems or introduce entirely new harms.