HAI Policy Resources
Recent advances in artificial intelligence (AI) have led to excitement about important scientific discoveries and technological innovations. Increasingly, however, researchers in AI safety, ethics, and other disciplines are identifying risks in how AI technologies are developed, deployed, and governed. Academics, policymakers, and technologists have called for more proactive measures to tackle risks associated with AI and its applications. These range from voluntary frameworks to supranational legislation. Legislative action is on the rise. The world’s first legal framework for AI was unveiled on April 21, 2021, when the European Commission published a comprehensive proposal to regulate “high-risk” AI use cases.
Issue Brief, June 2021
Despite the emergence of new machine learning technologies capable of diagnosing diseases, understanding speech, or recognizing images, the enormous economic potential of many digital goods and services remains largely untapped. In this brief, scholars propose a set of policy recommendations that could increase productivity growth, make the U.S. more competitive, and reduce income inequality.
Policy Brief, May 2021
The U.S. Intelligence Community faces a moment of reckoning and AI lies at the heart of it. Since 9/11, America’s intelligence agencies have become hardwired to fight terrorism. Today’s threat landscape, however, is changing dramatically, with a resurgence of great power competition and the rise of cyber threats enabling states and non-state actors to spy, steal, disrupt, destroy, and deceive across vast distances — all without firing a shot.
Policy Brief, March 2021
Natural language processing for mental health monitoring is an emerging use of AI poised to disrupt the landscape of the health care industry. As social media platforms allow people to share their thoughts and feelings with the world, users’ posts and reactions extend the scope of medical screening methods for psychological disorders such as depression. Users are already being marketed to with sophistication based on these behaviors — why not leverage these technologies for public health?
Policy Brief, Feb 2021
Facial recognition technologies have grown in sophistication and adoption throughout American society. Consumers now use facial recognition technologies (FRT) to unlock their smartphones and cars; retailers use them for targeted advertising and to monitor stores for shoplifters; and, most controversially, law enforcement agencies have turned to FRT to identify suspects. Significant anxieties around the technology have emerged—including privacy concerns, worries about surveillance in both public and private settings, and the perpetuation of racial bias.
Policy Brief, Nov 2020
Facial recognition technology has proliferated throughout society – today it helps us unlock smartphones, access our bank accounts, and receive targeted advertising. FRT is also used in high-stakes situations where the output of the software can lead to substantial effects on a person’s life. This has led to a loud call to understand and regulate the technology. We support the call for rigorous reflection on its use and its accuracy. In this white paper, we aim to provide a common understanding of the capabilities and limitations of the technology in order to properly assess its risks and benefits.
White Paper, Nov 2020
Popular culture has envisioned societies of intelligent machines for generations, with Alan Turing notably foreseeing the need for a test to distinguish machines from humans in 1950. Now, advances in artificial intelligence promise to make creating convincing fake multimedia content, such as video, images, or audio, relatively easy for many. Unfortunately, this will include sophisticated bots with supercharged self-improvement abilities that are capable of generating more dynamic fakes than anything seen before.
Policy Brief, Nov 2020
In this issue brief series, HAI associate director Rob Reich and HAI fellow Marietje Schaake examine how technology will impact public debate, affect the electoral process, and may even determine the election outcome.
Social media platforms break traditional barriers of distance and time between people and present unique challenges in calculating the precise value of the transactions and interactions they enable. In the case of a company like Facebook, each layer of connections creates value and attracts additional users to the platform. The compounding nature of this phenomenon gives platforms significant market power. In the face of growing scrutiny from policymakers, the media, and the public, regulators are now considering a number of proposals to ensure platforms do not abuse their market power and to allow the economic benefits of their networks to be more equitably distributed.
Policy Brief, Oct 2020
With advances in AI, researchers can now train computer algorithms to interpret medical images – often with accuracy comparable to physicians. Yet a survey of medical research shows that these algorithms rely on datasets that lack population diversity and could introduce bias into the understanding of a patient’s health condition.
Policy Brief, Oct 2020
While the use of AI spans the breadth of the U.S. federal government, government AI remains uneven at best, and problematic and perhaps dangerous at worst.
Policy Brief, Sep 2020
Susan Athey’s Written Testimony to House Budget Committee Hearing on Machines, Artificial Intelligence, & the Workforce.
Testimony, Sep 2020
Input on the European Commission White Paper “On Artificial Intelligence – A European approach to excellence and trust”
White Paper by “Wonks and Techies,” a multidisciplinary group at Stanford University collaborating on international technology and policy issues, led by Marietje Schaake.
Paper, June 2020