
Radical Proposal: Third-Party Auditor Access for AI Accountability

Date: October 20, 2021
Topics: Machine Learning

A scholar proposes legal protections and regulatory involvement to support organizations that uncover algorithmic harm.

Algorithmic failures have serious consequences: A well-respected teacher is fired when an automated assessment tool gives her a low rating; a Black man is arrested after being misidentified by a police department’s facial recognition tool; a Latinx businessperson is denied credit by an AI system that relies on information about where she lives rather than her individual creditworthiness.

These sorts of algorithmic abuses are often uncovered and publicized by third-party algorithmic auditors, says Deb Raji, a fellow at the Mozilla Foundation and the Algorithmic Justice League and a PhD student at UC Berkeley who studies algorithmic accountability and evaluation. These auditors, who include civil society groups, law firms, investigative journalists, and academic researchers, scrutinize AI systems from the outside.

Unfortunately, Raji says, “despite the important impact these third-party auditors have had on deployed AI systems, they are not well supported in their work and are not afforded any legal protections.”

Indeed, many companies have become adept at dodging the poking and prodding of such outsiders, who need access to their AI systems in order to determine how they work, Raji says. Some companies have even resorted to legal action, pursuing criminal charges under various anti-hacking laws or filing civil suits to stop auditors from gathering data.

To support the important work done by third-party auditors, Raji proposes a series of policy interventions that could make rigorous third-party algorithmic audits a reality in the U.S. The proposal involves three key components that would enable and support third-party auditor access and protection: a national incident reporting system to prioritize audits; an independent audit oversight board to certify auditors, set audit standards, and oversee the audit process; and mandated, regulator-facilitated data access for certified third-party auditors.
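As one way to picture the first component, here is a minimal sketch of what a single record in a national incident reporting system might capture. Every field name and the example report below are assumptions made for illustration, not details of Raji's proposal.

```python
# Purely illustrative: a sketch of one record in a hypothetical national
# incident reporting system. All field names are assumptions, not part of
# Raji's proposal.
from dataclasses import dataclass
from datetime import date

@dataclass
class AlgorithmicIncident:
    system_name: str        # the deployed AI system implicated in the harm
    deployer: str           # organization operating the system
    harm_type: str          # e.g., "bias", "safety", "privacy", "ecological"
    affected_group: str     # community reporting or experiencing the harm
    reported_on: date
    description: str
    severity: int = 1       # coarse triage score a regulator might assign

# A regulator could rank incoming reports to decide which audits to prioritize.
reports = [
    AlgorithmicIncident("face-match-v2", "ExamplePD", "bias",
                        "misidentified residents", date(2021, 10, 1),
                        "Wrongful arrest after a false facial-recognition match.",
                        severity=5),
]
for incident in sorted(reports, key=lambda r: r.severity, reverse=True):
    print(incident.system_name, incident.harm_type, incident.severity)
```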

Raji presented the proposal at Stanford HAI’s “Policy and AI: Four Radical Proposals for a Better Society” conference, held Nov. 9-10, 2021. Watch her presentation below.

Third-Party Access for Algorithmic Accountability: How It Works

Companies often use employees or consultants to perform internal audits called algorithmic impact assessments. But such audits are typically done before an algorithm is deployed in the wild, Raji says. And they tend to focus on meeting the needs of the intended users of the system – a police department, for example – rather than the needs of potentially impacted communities. Moreover, internal audits are rarely publicized, and companies involved in this space have often provided misleading information. “They have not been reliable sources of information about the effectiveness of their own systems,” she says.

 

Read all the proposals:
  • Universal Basic Income to Offset Job Losses Due to Automation
  • Data Cooperatives Could Give Us More Power Over Our Data
  • Middleware Could Give Consumers Choices Over What They See Online

 

By contrast, third-party audits are done by independent entities that often represent an impacted group and have no contractual relationship with the company. These audits are directed at a specific evaluation with real potential repercussions. And they can address harms that go beyond bias to include ecological, safety, or privacy impacts, as well as a system’s failure to live up to appropriate standards for transparency, explainability, and accountability.

“It’s really important to have these kinds of audits because they provide concrete evidence focused on the concerns of an affected population,” Raji says. 
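To make "concrete evidence" tangible, here is a minimal sketch of the kind of disparity measurement an external auditor might run over a system's logged decisions, given ground-truth outcomes. The field names, example records, and metric choice are illustrative assumptions, not data or methods from any real audit.

```python
# Minimal sketch of a third-party bias audit: given logged decisions from a
# deployed system (e.g., a face-matching tool) plus ground truth, compare
# error rates across demographic groups. All records here are hypothetical.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of dicts with 'group', 'predicted', 'actual' keys."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        if r["predicted"] != r["actual"]:
            errors[r["group"]] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical audit sample: the disparity itself is the concrete evidence.
sample = [
    {"group": "A", "predicted": "match", "actual": "match"},
    {"group": "A", "predicted": "no_match", "actual": "no_match"},
    {"group": "B", "predicted": "match", "actual": "no_match"},  # false match
    {"group": "B", "predicted": "no_match", "actual": "no_match"},
]
print(error_rates_by_group(sample))  # {'A': 0.0, 'B': 0.5}
```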

How to Make Third-Party Auditor Protections a Reality

The Federal Trade Commission could play a key role in implementing this proposal, Raji says. “They’re an agency that has a lot of access granted to them already through the FTC Act, and there’s an opportunity for them to share that access with qualified third-party auditors.” In addition, in its consumer protection role, the FTC already has an incident database and a vetting process for third-party auditors, as well as the legal infrastructure to act as an enforcement agency. “They are positioned well to execute on this proposal in the next couple of years,” she says.

Raji concedes that algorithmic auditing is a nascent field with no professional codes of conduct or standards for what constitutes a thorough audit. Nevertheless, she says, many affected populations feel an urgency to address the ways AI is harming them right now, so it’s important that her proposal be implemented quickly. 

“I think it’s step zero to allow qualified representatives an opportunity to advocate on behalf of affected communities – to ask questions about the technology that’s impacting them; to collect evidence of that impact; to try to stop the inappropriate use of that technology; and to protect themselves from retaliation when they raise issues of algorithmic harm,” she says.

Watch the Presentation

[Video of Raji’s conference presentation appears here in the original article.]
Stanford HAI’s mission is to advance AI research, education, policy and practice to improve the human condition.

Contributor(s): Katharine Miller
