
HAI Policy Briefs

October 2022

Using Algorithm Audits to Understand AI

Artificial intelligence applications are frequently deployed without any mechanism for external testing or evaluation. At the same time, many AI systems make decisions as black boxes: modern machine learning systems are opaque to outside stakeholders, including researchers, who can only probe a system by providing inputs and measuring outputs. Researchers, users, and regulators alike must therefore grapple with using, being affected by, or regulating algorithms they cannot fully observe. This brief reviews the history of algorithm auditing, describes its current state, and offers best practices for conducting algorithm audits today.

Key Takeaways


➜ We identified nine considerations for algorithm auditing, including legal and ethical risks, factors of discrimination and bias, and conducting audits continuously so that they do not capture just one moment in time.

➜ We found that researchers are activists: they work on topics with social and political impacts and act as agents with sociopolitical effects, and they must therefore factor the social impact of algorithmic development into their work.

➜ Algorithm auditors must collaborate with other experts and stakeholders, including social scientists, lawyers, ethicists, and the users of algorithmic systems, to more comprehensively and ethically understand the impacts of those systems on individuals and society at large.



Contact Us

Email HAI-Policy@stanford.edu for general inquiries about upcoming training programs, briefings, or other policy-related work.