New HAI-Funded Research Explores AI in Governance

The impact of the private sector’s use of artificial intelligence (AI) is already a pressing topic in the national conversation. Less known, but no less significant, is the rapid proliferation of AI in government -- and its corresponding impact.

Last March, HAI funded research exploring AI’s growing role in federal agencies through our seed grant program. The findings were published yesterday in a report, “Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies” -- the most comprehensive study of the subject ever conducted in the U.S. The report was commissioned by the Administrative Conference of the United States, an agency that advises federal regulatory agencies.
 
Co-led by Dan Ho, one of our newest associate directors, along with Stanford Law Professor David Freeman Engstrom, NYU Law Professor Catherine Sharkey, and California Supreme Court Justice Mariano-Florentino Cuéllar, the study found that -- contrary to the common perception that government technology is obsolete -- nearly half of all federal agencies are already employing AI, often in ways that make government work better and save money. These applications shape essential and often beneficial decisions affecting millions of Americans in areas such as environmental protection, criminal justice, social welfare benefits, financial regulation, health care, and intellectual property. The report also uncovered certain uses that give pause, particularly around the accountability of such tools.
 
The research epitomizes the interdisciplinary approach we champion at HAI: while Ho’s home at Stanford is the law school, his work brings an unconventional lens of analytics and data to his field. Indeed, the report involved substantial collaboration among thirty Stanford law, computer science, and engineering students and five NYU Law students.
 
We spoke with Ho about the report’s findings, why this research matters, and his recommendations for what’s next.
 
Why was it important for you to investigate this topic?
 
The actual adoption of AI across government has been largely unexplored. Its importance is underscored by our surprising finding that nearly half of the federal agencies we examined have already experimented with AI. Many agencies are putting AI to good use. The Social Security Administration, for instance, is using natural language processing to improve the accuracy and speed of disability adjudication. As another example, I’m part of a team with the Regulation, Evaluation, and Governance Lab (RegLab) at Stanford that is working with the Environmental Protection Agency to use machine learning to reduce the significant noncompliance rate under the Clean Water Act, a longstanding and high-priority challenge for environmental regulators.
 
In order for the federal government to employ AI efficiently and fairly in the future, it’s vital that we understand how best to use it and how to ensure it isn’t misused. Until now, the scope and scale of the federal government’s AI efforts have not been thoroughly investigated. To guide that use, and to uncover the best practices already in place, we first need to track and understand it. Or to paraphrase Charles Kettering: a problem identified is half solved. This report is that first step.
 
What are some of the most beneficial government uses of the technology?
 
Our report documents many beneficial uses. To name a few: the Securities and Exchange Commission uses AI to “shrink the haystack” of potential insider trading violations; the Centers for Medicare and Medicaid Services use AI to identify fraud; and the Patent and Trademark Office has pioneered techniques to improve the processing of patent and trademark applications. Numerous agencies, such as the Food and Drug Administration, the Consumer Financial Protection Bureau, and the Department of Housing and Urban Development, also use AI to improve the processing of public complaints or to respond to public inquiries.
 
AI also has great potential for saving Washington (and taxpayers) money. It is good at automatically identifying patterns and anomalies, so administrative judges can spot errors in draft decisions and streamline adjudication. The promise is that such tools can help shrink backlogs that have left citizens waiting years for benefits determinations.
 
What are some problems you uncovered?
 
With this rapid adoption come a range of important challenges. At the top of public debate have been fears about government surveillance and control, as seen in debates about facial recognition, autonomous weapons, and the use of risk assessment scores in life-altering bail, sentencing, and parole decisions.
 
Our report goes beyond this and documents a range of other challenges that emerge when you dig past the headline-grabbing cases. First and foremost, government agencies often lack the human capital to take advantage of rapid advances in technology. One agency couldn’t explain its errors because it didn’t have access to the proprietary source code of the system it used. Makeshift solutions and insecure AI can put people at risk.
 
Second, there are unique legal constraints when the government makes decisions. Agencies have to explain their decisions, but such explanations become increasingly difficult as algorithmic decision-making systems displace human judgment. As AI tools proliferate, federal agencies must ensure accountability and fidelity to legal norms of transparency, explanation, and non-discrimination -- and building systems that function effectively within complex human and institutional environments becomes increasingly important.
 
Is open-sourcing AI code used by the federal government a good idea?
 
There are instances when open-sourcing code can make sense from a transparency and accountability perspective. But there are other instances where open-sourcing can introduce challenges. In the enforcement context, for instance, publishing the model used to identify tax cheats could enable adversarial attacks and gaming.
 
How does federal use of AI exacerbate public anxiety about government?
 
A long-term concern is that AI could widen the gap between the haves and the have-nots and fuel political anxieties about government.
 
As government increasingly employs AI, unless it does so with proper safeguards, well-financed and technologically savvy groups could exploit these tools. That, in turn, could disadvantage those with fewer resources or less technological capital.
 
Our report points to the importance of internal accountability: the agencies that are most sophisticated in developing these tools have an internal culture of challenging, testing, and explaining new tools, which is critical to the trustworthiness of these systems. If the public believes the system is rigged, political support for tech-savvy government may evaporate.
 
What are some of your recommendations?
 
As the government leans more and more on AI, we have to develop and adapt mechanisms for accountability, transparency, and fairness. In some instances, more vetting can be done at the rulemaking stage, when an agency first adopts AI. Judges will increasingly grapple with these issues as these tools are challenged in court.
 
We believe one basic approach would be requiring federal agencies to benchmark AI tools, reserving some decisions for conventional (human) decision-making and comparing those outcomes to ones aided by AI tools. Such a “human-alongside-the-loop” approach could smoke out the potential for bias, arbitrariness, and error, increasing the trustworthiness of these tools to officials, legislators, judges, and the public.
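To make the idea concrete, here is a minimal sketch in Python of how such a benchmark might work, under the simplifying assumption that decisions can be scored against some ground truth; the function and variable names (cases, decide_human, decide_ai_assisted, ground_truth) are hypothetical placeholders, not anything prescribed in the report.

```python
# A minimal sketch of a "human-alongside-the-loop" benchmark: reserve a
# random holdout of cases for conventional human decisions and compare
# error rates against AI-assisted decisions on the rest. All names here
# are hypothetical placeholders, not part of the report.
import random
from typing import Callable, Dict, List, Tuple

def benchmark(cases: List[Dict],
              decide_human: Callable[[Dict], str],
              decide_ai_assisted: Callable[[Dict], str],
              ground_truth: Callable[[Dict], str],
              holdout_fraction: float = 0.1,
              seed: int = 0) -> Tuple[float, float]:
    """Return (human-only error rate, AI-assisted error rate)."""
    rng = random.Random(seed)
    shuffled = list(cases)
    rng.shuffle(shuffled)

    # Randomly assign a fraction of cases to the human-only track.
    cut = int(len(shuffled) * holdout_fraction)
    human_track, ai_track = shuffled[:cut], shuffled[cut:]

    def error_rate(track: List[Dict], decide: Callable[[Dict], str]) -> float:
        if not track:
            return 0.0
        errors = sum(decide(c) != ground_truth(c) for c in track)
        return errors / len(track)

    # A large gap between the two rates flags potential bias or error.
    return (error_rate(human_track, decide_human),
            error_rate(ai_track, decide_ai_assisted))
```

In real adjudication a clean ground truth rarely exists, so an agency might instead compare proxies such as reversal rates on appeal, processing times, or disparities in outcomes across groups.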
 
Like any powerful tool, AI has the potential to make things better or worse. The key is having a clear understanding of the landscape, and convening an open discussion on our next move as a society.
 
What’s next for your research? 
 
I’m excited to be working with the HAI community on issues of AI governance, as the report underscores the importance of human-centered AI. At HAI, we are building out a policy initiative to prepare the next generation of technologists and policymakers to wrestle with these issues, armed with a rigorous understanding of both AI and law.
 
At the RegLab, we are addressing some of these basic challenges of public-sector AI by building partnerships that enable government agencies to adapt cutting-edge AI and to envision the future of data-driven, effective, humane, and fair governance.
 
We welcome anyone to join us in this venture! 
