HAI Issue Brief
The European Commission’s Artificial Intelligence Act
Recent advances in artificial intelligence (AI) have led to excitement about important scientific discoveries and technological innovations. Increasingly, however, researchers in AI safety, ethics, and other disciplines are identifying risks in how AI technologies are developed, deployed, and governed. Academics, policymakers, and technologists have called for more proactive measures to tackle risks associated with AI and its applications, ranging from voluntary frameworks to supranational legislation, and legislative action is on the rise. The world’s first comprehensive legal framework for AI was unveiled on April 21, 2021, when the European Commission published a proposal to regulate “high-risk” AI use cases.
➜ To address the variety of risks associated with societal adoption of AI, the European Commission has proposed a set of regulations that promote the uptake of AI and try to mitigate or prevent harms associated with certain uses of the technology.
➜ Under the proposal, developers of high-risk AI systems will need to perform both pre-deployment conformity assessments and post-market monitoring to demonstrate that their systems meet all requirements of the AI Act (AIA)’s risk framework.
➜ The AIA expressly prohibits certain uses of AI: subliminal distortion of a person’s behavior that may cause physical or psychological harm; exploitation of the vulnerabilities of specific groups of people, such as the young, the elderly, or persons with disabilities; social scoring that may lead to unjustified or disproportionate detrimental treatment; and real-time remote biometric identification in publicly accessible spaces by law enforcement (except for specific purposes such as searching for missing persons or counterterrorism operations).