
HAI Policy Briefs

September 2020

AI’s Promise and Peril for the U.S. Government

While the use of artificial intelligence (AI) spans the breadth of the U.S. federal government, government AI remains uneven at best, and problematic and perhaps dangerous at worst. Our research team of lawyers and computer scientists examined AI uses among federal administrative agencies, in applications ranging from facial recognition to the detection of insider trading and health care fraud. Our report, commissioned by the Administrative Conference of the United States and generously supported by Stanford Law School, NYU Law School, and Stanford's Institute for Human-Centered AI, is the most comprehensive study of the subject ever conducted in the United States. The report's findings reveal deep concerns about the government's growing use of these tools; at the same time, we suggest how AI could be harnessed to make the federal government work better, more fairly, and at lower cost.

Key Takeaways


➜  Few federal agencies are using AI in ways that rival the private sector’s sophistication and prowess, yet AI use is widespread and poses numerous governance questions.

➜ AI tools used by the federal government need to reflect transparency and society’s longstanding legal, political and ethical foundations.

➜ At federal agencies, many of the most compelling AI tools were created from within by innovative, public-spirited technologists – not profit-driven private contractors.


Authors

David Freeman Engstrom - Stanford University
Daniel E. Ho - Stanford University
Catherine M. Sharkey - New York University
Mariano-Florentino Cuéllar - California Supreme Court
