AI's Promise and Peril for the U.S. Government | Stanford HAI

Policy Brief

Date: September 01, 2020
Topics: Government, Public Administration
Read Paper
Abstract

This brief examines AI uses among federal administrative agencies, highlighting governance concerns related to accountability, technological quality, and societal conflict.

Key Takeaways

  • Few federal agencies are using AI in ways that rival the private sector’s sophistication and prowess, yet AI use is widespread and poses numerous governance questions.

  • AI tools used by the federal government need to reflect transparency and society’s longstanding legal, political, and ethical foundations.

  • At federal agencies, many of the most compelling AI tools were created from within by innovative, public-spirited technologists – not profit-driven private contractors.

Executive Summary

While the use of artificial intelligence (AI) spans the breadth of the U.S. federal government, government AI remains uneven at best, and problematic and perhaps dangerous at worst. Our research team of lawyers and computer scientists examined AI uses among federal administrative agencies – from facial recognition to the detection of insider trading and health care fraud. Our report, commissioned by the Administrative Conference of the United States and generously supported by Stanford Law School, NYU Law School, and Stanford’s Institute for Human-Centered AI, is the most comprehensive study of the subject ever conducted in the United States. The report’s findings reveal deep concerns about the growing government use of these tools; at the same time, we suggest how AI could be harnessed to make the federal government work better, more fairly, and at lower cost.

In March 2019, the Stanford Institute for Human-Centered Artificial Intelligence funded research exploring AI’s growing role in federal agencies. The project culminated in the 122-page report, “Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies,” commissioned by the Administrative Conference of the United States, an independent federal agency that recommends improvements to administrative procedure.

In the big picture, AI promises to transform how government agencies do their work by reducing the cost of core governance functions, improving decision-making, and harnessing the power of big data for greater efficiency. The benefits are many. In the enforcement context, for example, the Securities and Exchange Commission can use AI to “shrink the haystack” of potential insider trading violations, and the Centers for Medicare and Medicaid Services use AI to identify fraud. AI tools can help administrative judges spot errors in draft decisions adjudicating disability benefits and help examiners at the Patent and Trademark Office process patent and trademark applications more efficiently and accurately. The Food and Drug Administration, the Consumer Financial Protection Bureau, and the Department of Housing and Urban Development currently task AI with engaging the public by sifting through millions of citizen complaints. Other agencies have experimented with chatbots to field questions from welfare beneficiaries, asylum seekers, and taxpayers.

While the benefits are real and tangible, key issues and problems remain. Questions arise, for example, about the proper design of algorithms and user interfaces, the respective scope of human and machine decision-making, the boundaries between public actions and private contracting, the capacity to learn over time using AI, and whether the use of AI is even permitted in certain contexts.

Authors
  • David Engstrom
  • Daniel E. Ho
  • Catherine M. Sharkey
  • Mariano-Florentino Cuéllar

Related Publications

Cleaning Up Policy Sludge: An AI Statutory Research System
Faiz Surani, Lindsey A. Gailmard, Allison Casasola, Varun Magesh, Emily J. Robitschek, Christine Tsang, Derek Ouyang, Daniel E. Ho
Policy Brief · Quick Read · Jun 18, 2025

This brief introduces a novel AI tool that performs statutory surveys to help governments—such as the San Francisco City Attorney’s Office—identify policy sludge and accelerate legal reform.


Assessing the Implementation of Federal AI Leadership and Compliance Mandates
Jennifer Wang, Mirac Suzgun, Caroline Meinhardt, Daniel Zhang, Kazia Nowacki, Daniel E. Ho
White Paper · Deep Dive · Jan 17, 2025

This white paper assesses federal efforts to advance leadership on AI innovation and governance through recent executive actions and emphasizes the need for senior-level leadership to achieve a whole-of-government approach.


Expanding Academia’s Role in Public Sector AI
Kevin Klyman, Aaron Bao, Caroline Meinhardt, Daniel Zhang, Elena Cryst, Russell Wald
Issue Brief · Quick Read · Dec 04, 2024

This brief analyzes the disparity between academia and industry in frontier AI research and presents policy recommendations for ensuring a stronger role for academia in public sector AI.


Daniel E. Ho's Testimony Before the California Senate Governmental Organization Committee and the Senate Budget and Fiscal Review Subcommittee No. 4 on State Administration and General Government
Daniel E. Ho
Testimony · Quick Read · Feb 21, 2024

In this testimony presented in the California Senate Hearing “California at the Forefront: Steering AI Towards Ethical Horizons,” Daniel E. Ho offers three recommendations for how California should lead the nation in responsible AI innovation by nurturing and attracting technical talent into public service, democratizing access to computing and data resources, and addressing the information asymmetry about AI risks.
