AI's Promise and Peril for the U.S. Government

This brief examines AI uses among federal administrative agencies, highlighting governance concerns related to accountability, technological quality, and societal conflict.
Key Takeaways
Few federal agencies are using AI in ways that rival the private sector’s sophistication and prowess, yet AI use is widespread and poses numerous governance questions.
AI tools used by the federal government need to be transparent and to reflect society’s longstanding legal, political, and ethical foundations.
At federal agencies, many of the most compelling AI tools were built in-house by innovative, public-spirited technologists – not by profit-driven private contractors.
Executive Summary
While the use of artificial intelligence (AI) spans the breadth of the U.S. federal government, government AI remains uneven at best, and problematic and perhaps dangerous at worst. Our research team of lawyers and computer scientists examined AI uses among federal administrative agencies – from facial recognition to the detection of insider trading and health care fraud. Our report, commissioned by the Administrative Conference of the United States and generously supported by Stanford Law School, NYU Law School, and Stanford’s Institute for Human-Centered AI, is the most comprehensive study of the subject ever conducted in the United States. Its findings reveal deep concerns about the government’s growing use of these tools, but they also suggest how AI could be unleashed to make the federal government work better, more fairly, and at lower cost.
In March 2019, the Stanford Institute for Human-Centered Artificial Intelligence funded research exploring AI’s growing role in federal agencies. The project culminated in the 122-page report, “Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies,” commissioned by the Administrative Conference of the United States, an independent federal agency that recommends improvements to administrative procedure.
In the big picture, AI promises to transform how government agencies do their work by reducing the cost of core governance functions, improving decision-making, and harnessing the power of big data for greater efficiency. The benefits are numerous. In the enforcement context, for example, the Securities and Exchange Commission can use AI to “shrink the haystack” of potential insider-trading violations, and the Centers for Medicare and Medicaid Services use AI to identify health care fraud. AI tools can help administrative judges spot errors in draft decisions adjudicating disability benefits and help examiners at the Patent and Trademark Office process patent and trademark applications more efficiently and accurately. The Food and Drug Administration, the Consumer Financial Protection Bureau, and the Department of Housing and Urban Development currently use AI to engage the public by sifting through millions of citizen complaints. Other agencies have experimented with chatbots that field questions from welfare beneficiaries, asylum seekers, and taxpayers.
While the benefits are real and tangible, key issues and problems remain. Questions arise, for example, about the proper design of algorithms and user interfaces, the respective scope of human and machine decision-making, the boundaries between public action and private contracting, agencies’ capacity to learn over time as they use AI, and whether the use of AI is even permitted in certain contexts.