Policy Brief

Using Algorithm Audits to Understand AI

Date: October 06, 2022
Topics: Ethics, Equity, Inclusion; Privacy, Safety, Security
Abstract

This brief reviews the history of algorithm auditing, describes its current state, and offers best practices for conducting algorithm audits today.

Key Takeaways

  • We identified nine considerations for algorithm auditing, including legal and ethical risks, factors of discrimination and bias, and the need to conduct audits continuously so they do not capture just one moment in time.

  • We found that researchers are also activists: they work on topics with social and political impacts, act in ways that have sociopolitical effects, and must factor the social impact of algorithmic development into their work.

  • Algorithm auditors must collaborate with other experts and stakeholders, including social scientists, lawyers, ethicists, and the users of algorithmic systems, to understand more comprehensively and ethically how those systems affect individuals and society at large.

Executive Summary

Artificial Intelligence continues to proliferate, from government services and academic research to the transportation, energy, and healthcare sectors. Yet one of the greatest challenges in using, understanding, and regulating AI persists: the black-box nature of many algorithms.

Dr. Latanya Sweeney’s 2013 paper, “Discrimination in Online Ad Delivery,” speaks to this very point. Sweeney, a professor at Harvard, surveyed 2,184 racially associated names in relation to searches tied to Google AdSense, Google’s service for placing ads at the top of users’ search results pages. All told, she found that ads placed on the page were far more likely to suggest an arrest record under queries for Black-sounding names than white-sounding ones—“raising questions as to whether Google’s advertising technology exposes racial bias in society and how ad and search technology can help develop to assure racial fairness.”

This question of racist or otherwise discriminatory AI is not just a widespread problem, as much other research has uncovered; it is also an issue of black-box decision-making. With respect to Sweeney’s findings, one possibility is that Google deliberately targeted minority-sounding names with racist suggestions for “arrest records.” It is also possible, however, that internet users were more likely to search Black names and then click on websites mentioning arrest, and that the ad-serving system learned this association from those clicks. The harms and dangers of this algorithmic discrimination are clear, but understanding an algorithm’s decision-making process can be far more difficult. Doing so matters greatly for researchers, policymakers, and the public.

In our paper, titled “Auditing Algorithms: Understanding Algorithmic Systems from the Outside In,” we examine how algorithm audits—like the input- and output-testing Sweeney did for her research—are a powerful technique for understanding AI. In collaboration with researchers from Northeastern University, University of Illinois at Urbana-Champaign, and University of Michigan, we provide an overview of methodologies for algorithm audits, recount two decades of algorithm audits across numerous domains (from health to politics), and propose a set of best practices for conducting algorithm audits. We conclude with a discussion of algorithm audits and their social, ethical, and political implications.
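
To make this input-output approach concrete, below is a minimal sketch of an audit loop in Python. It is an illustration under stated assumptions, not Sweeney’s actual pipeline: the fetch_ad_texts stub simulates the system under audit, and the name lists and the "arrest" keyword are hypothetical placeholders.

```python
import random

# Hypothetical stand-in for the system under audit. A real audit would
# issue a live search for each name and scrape the ad copy returned
# alongside the results; this stub just returns simulated ad text.
def fetch_ad_texts(name: str) -> list[str]:
    templates = [
        f"Background check for {name}",
        f"{name}: arrest records?",
        f"Contact {name} today",
    ]
    return random.sample(templates, k=2)

def audit_ad_rates(names_by_group: dict[str, list[str]],
                   keyword: str = "arrest") -> dict[str, float]:
    """Input-output testing: vary only the input (the queried name),
    observe the output (the ads), and compare rates across groups."""
    rates = {}
    for group, names in names_by_group.items():
        flagged = sum(
            any(keyword in ad.lower() for ad in fetch_ad_texts(name))
            for name in names
        )
        rates[group] = flagged / len(names)
    return rates

if __name__ == "__main__":
    # Placeholder name lists; Sweeney's study used 2,184 racially
    # associated first names.
    groups = {
        "group_a": ["Name A1", "Name A2", "Name A3"],
        "group_b": ["Name B1", "Name B2", "Name B3"],
    }
    print(audit_ad_rates(groups))
```

A real audit would replace the stub with live queries against the actual system, use validated name lists, repeat queries over time rather than capturing one snapshot, and test whether the difference in flagged-ad rates across groups is statistically significant.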

Authors
  • Danaë Metaxa
  • Jeffrey Hancock

Related Publications

Issue Brief
Moving Beyond the Term "Global South" in AI Ethics and Policy
Evani Radiya-Dixit, Angèle Christin
Nov 19, 2025

This brief examines the limitations of the term "Global South" in AI ethics and policy, and highlights the importance of grounding such work in specific regions and power structures.

Testimony
Jen King's Testimony Before the U.S. House Committee on Energy and Commerce Oversight and Investigations Subcommittee
Jennifer King
Nov 18, 2025

In this testimony presented to the U.S. House Committee on Energy and Commerce’s Subcommittee on Oversight and Investigations hearing titled “Innovation with Integrity: Examining the Risks and Benefits of AI Chatbots,” Jen King shares insights on data privacy concerns connected with the use of chatbots. She highlights opportunities for congressional action to protect chatbot users from related harms.

Policy Brief
Validating Claims About AI: A Policymaker’s Guide
Olawale Salaudeen, Anka Reuel, Angelina Wang, Sanmi Koyejo
Sep 24, 2025

This brief proposes a practical validation framework to help policymakers separate legitimate claims about AI systems from unsupported claims.

Policy Brief
Increasing Fairness in Medicare Payment Algorithms
Marissa Reitsma, Thomas G. McGuire, Sherri Rose
Sep 01, 2025

This brief introduces two algorithms that can promote fairer Medicare Advantage spending for minority populations.