Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising | Stanford HAI


Date: October 04, 2023
Topics: Design, Human-Computer Interaction
Abstract

Algorithm audits are powerful tools for studying black-box systems without direct knowledge of their inner workings. While very effective in examining technical components, the method stops short of a sociotechnical frame, which would also consider users themselves as an integral and dynamic part of the system. Addressing this limitation, we propose the concept of sociotechnical auditing: auditing methods that evaluate algorithmic systems at the sociotechnical level, focusing on the interplay between algorithms and users as each impacts the other. Just as algorithm audits probe an algorithm with varied inputs and observe outputs, a sociotechnical audit (STA) additionally probes users, exposing them to different algorithmic behavior and measuring their resulting attitudes and behaviors. As an example of this method, we develop Intervenr, a platform for conducting browser-based, longitudinal sociotechnical audits with consenting, compensated participants. Intervenr investigates the algorithmic content users encounter online, and also coordinates systematic client-side interventions to understand how users change in response. As a case study, we deploy Intervenr in a two-week sociotechnical audit of online advertising (N = 244) to investigate the central premise that personalized ad targeting is more effective on users. In the first week, we observe and collect all browser ads delivered to users, and in the second, we deploy an ablation-style intervention that disrupts normal targeting by randomly pairing participants and swapping all their ads. We collect user-oriented metrics (self-reported ad interest and feeling of representation) and advertiser-oriented metrics (ad views, clicks, and recognition) throughout, along with a total of over 500,000 ads. 
Our STA finds that targeted ads indeed perform better with users, but also that users begin to acclimate to different ads in only a week, casting doubt on the primacy of personalized ad targeting given the impact of repeated exposure. In comparison with other evaluation methods that only study technical components, or only experiment on users, sociotechnical audits evaluate sociotechnical systems through the interplay of their technical and human components.
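The swap intervention above can be sketched in a few lines. This is a minimal toy illustration, not the Intervenr implementation: all names (`pair_and_swap`, `ads_by_user`) are hypothetical, and it assumes each participant's ads are available as a simple per-user list.

```python
import random

def pair_and_swap(participants, ads_by_user, rng=random):
    """Randomly pair participants and exchange their ad streams,
    a toy version of the ablation-style swap described above."""
    shuffled = list(participants)
    rng.shuffle(shuffled)
    swapped = dict(ads_by_user)
    # Walk the shuffled list two at a time; each pair trades ads.
    # (With an odd count, the last participant keeps their own ads.)
    for a, b in zip(shuffled[::2], shuffled[1::2]):
        swapped[a], swapped[b] = ads_by_user[b], ads_by_user[a]
    return swapped

# Hypothetical example: four participants, three ads each.
users = ["u1", "u2", "u3", "u4"]
ads = {u: [f"{u}-ad{i}" for i in range(3)] for u in users}
swapped = pair_and_swap(users, ads)
```

The swap is an involution: whoever receives a participant's ads also sends theirs back, so the overall pool of delivered ads is unchanged while every participant sees another user's targeting.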

Authors
  • Michelle Lam
  • Ayush Pandit
  • Colin H. Kalicki
  • Rachit Gupta
  • Poonam Sahoo
  • Danaë Metaxa
Related Publications

Stories for the Future 2024
Isabelle Levent
Deep Dive, Mar 31, 2025
Research

We invited 11 sci-fi filmmakers and AI researchers to Stanford for Stories for the Future, a day-and-a-half experiment in fostering new narratives about AI. Researchers shared perspectives on AI and filmmakers reflected on the challenges of writing AI narratives. Together researcher-writer pairs transformed a research paper into a written scene. The challenge? Each scene had to include an AI manifestation, but could not be about the personhood of AI or AI as a threat. Read the results of this project.


How Culture Shapes What People Want From AI
Chunchen Xu, Xiao Ge, Daigo Misaki, Hazel Markus, Jeanne Tsai
May 11, 2024
Research

There is an urgent need to incorporate the perspectives of culturally diverse groups into AI developments. We present a novel conceptual framework for research that aims to expand, reimagine, and reground mainstream visions of AI using independent and interdependent cultural models of the self and the environment. Two survey studies support this framework and provide preliminary evidence that people apply their cultural models when imagining their ideal AI. Compared with European American respondents, Chinese respondents viewed it as less important to control AI and more important to connect with AI, and were more likely to prefer AI with capacities to influence. Reflecting both cultural models, findings from African American respondents resembled both European American and Chinese respondents. We discuss study limitations and future directions and highlight the need to develop culturally responsive and relevant AI to serve a broader segment of the world population.


The AI Arms Race In Health Insurance Utilization Review: Promises Of Efficiency And Risks Of Supercharged Flaws
Michelle Mello, Artem Trotsyuk, Abdoul Jalil Djiberou Mahamadou, Danton Char
Quick Read, Jan 06, 2026
Research

Health insurers and health care provider organizations are increasingly using artificial intelligence (AI) tools in prior authorization and claims processes. AI offers many potential benefits, but its adoption has raised concerns about the role of the “humans in the loop,” users’ understanding of AI, opacity of algorithmic determinations, underperformance in certain tasks, automation bias, and unintended social consequences. To date, institutional governance by insurers and providers has not fully met the challenge of ensuring responsible use. However, several steps could be taken to help realize the benefits of AI use while minimizing risks. Drawing on empirical work on AI use and our own ethical assessments of provider-facing tools as part of the AI governance process at Stanford Health Care, we examine why utilization review has attracted so much AI innovation and why it is challenging to ensure responsible use of AI. We conclude with several steps that could be taken to help realize the benefits of AI use while minimizing risks.


The Global AI Vibrancy Tool 2025
Loredana Fattorini, Nestor Maslej, Ray Perrault, Vanessa Parli, John Etchemendy, Yoav Shoham, Katrina Ligett
Deep Dive, Nov 24, 2025
Research

This methodological paper presents the Global AI Vibrancy Tool, an interactive suite of visualizations designed to facilitate cross-country comparisons of AI vibrancy across countries, using indicators organized into pillars. The tool offers customizable features that enable users to conduct in-depth country-level comparisons and longitudinal analyses of AI-related metrics.
