Policy Brief

Risks of AI Race Detection in the Medical System

Date: December 01, 2021
Topics: Healthcare; Ethics, Equity, Inclusion
Abstract

This brief warns that AI systems that infer patients’ race in medical settings could deepen existing healthcare disparities.

Key Takeaways

  • Algorithms that guess a patient’s race, without medical professionals even knowing it, may exacerbate already serious health and patient care disparities between racial groups.

  • Technical “de-biasing” techniques often discussed for other algorithms, such as distorting inputs (e.g., altering images), may do little to mitigate this behavior in medical imaging AI.

  • This research was made possible only by the efforts of several universities and hospitals to make open medical data a public good, allowing our researchers to explore important research questions free of conflicts with commercial interests.

  • Future research on the regulation and approval of medical imaging AI should include audits that explicitly evaluate an algorithm’s performance on data that includes racial identity, sex, and age (a minimal sketch of such a subgroup audit follows this list).
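
To make the final takeaway concrete, here is a minimal sketch of what such a subgroup audit might look like: the model’s discrimination performance is computed separately within each demographic group so that gaps become visible. The table, column names, and values below are hypothetical placeholders, not data from the study, and a real audit would follow the applicable regulatory protocol.

```python
# Minimal sketch of a demographic subgroup audit (hypothetical data).
# For each attribute, compute the model's AUC separately per subgroup so
# that performance gaps across racial identity, sex, and age are visible.
import pandas as pd
from sklearn.metrics import roc_auc_score

# Hypothetical audit table: one row per study, with the model's score,
# the ground-truth label, and self-reported demographics.
audit = pd.DataFrame({
    "score":    [0.91, 0.12, 0.75, 0.33, 0.88, 0.41, 0.67, 0.09],
    "label":    [1,    0,    1,    0,    1,    0,    1,    0],
    "race":     ["Black", "White", "Black", "Asian",
                 "White", "Black", "Asian", "White"],
    "sex":      ["F", "F", "M", "M", "F", "M", "F", "M"],
    "age_band": ["<40", "<40", "40-60", "40-60", "60+", "60+", "40-60", "<40"],
})

def subgroup_auc(df: pd.DataFrame, attribute: str) -> dict:
    """AUC of the model's scores, computed separately within each subgroup."""
    return {
        group: round(roc_auc_score(g["label"], g["score"]), 3)
        for group, g in df.groupby(attribute)
    }

for attribute in ["race", "sex", "age_band"]:
    print(attribute, subgroup_auc(audit, attribute))
```

A regulator or hospital running this kind of audit would flag any subgroup whose AUC falls meaningfully below the overall figure before approving or deploying the model.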

Executive Summary

Artificial Intelligence (AI) is being deployed for a range of tasks across the medical system, from patient face-scanning to early-stage cancer detection. The U.S. Food and Drug Administration (FDA) and other regulatory bodies around the world are in the process of vetting a range of such algorithms for use. Many in the field hope these AI systems can lower the costs of care, increase the accuracy of medical diagnostics, and boost hospital efficiency, among other benefits.

At the same time, however, AI systems that draw conclusions about demographic information could seriously exacerbate disparities in the medical system, and this is especially true of race. Left unexamined and unchecked, algorithms that assess patients’ racial identity, whether accurately or not, could worsen long-standing inequities in the quality and cost of, and access to, care.

Extensive research has already documented that facial and image recognition systems are often more accurate at recognizing lighter-skinned faces than darker-skinned ones. In practice, this has led to facial recognition systems that misidentify one Black person as another, or algorithms that fail to recognize darker skin tones at all. On the flip side, there has been much discussion of the harm that could follow when AI systems classify race remarkably well: accurate recognition tools can also be turned against people of color.

A groundbreaking series of findings was recently reported by a large international AI research consortium led by Dr. Judy Gichoya, an assistant professor at Emory University, in Reading Race: AI Recognizes Patient’s Racial Identity In Medical Images. This work explores how well AI models of the kind already deployed in the medical field can be trained to predict a patient’s race. The investigator team, including researchers from the Stanford Center for Artificial Intelligence in Medicine & Imaging (AIMI), applied multiple commonly deployed machine learning (ML) models to large publicly and privately available datasets of medical images. These databases included everything from chest and limb X-rays to CT scans of the lungs to mammogram screenings.
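
As a rough illustration of the kind of pipeline this involves (a sketch under stated assumptions, not the consortium’s actual code), the snippet below fine-tunes a DenseNet-121, an architecture commonly deployed for chest X-ray analysis, to predict self-reported race labels. The number of race classes and the data loader are hypothetical placeholders.

```python
# Hedged sketch: fine-tuning a commonly deployed image model to predict
# self-reported race from medical images. Illustrative only; the label set
# and the data loader are hypothetical.
import torch
import torch.nn as nn
from torchvision import models

NUM_RACE_CLASSES = 3  # hypothetical label set

# Start from ImageNet weights and replace the final classification layer
# so the network outputs race-label logits instead of ImageNet classes.
model = models.densenet121(weights="IMAGENET1K_V1")
model.classifier = nn.Linear(model.classifier.in_features, NUM_RACE_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_one_epoch(loader, device="cpu"):
    """One pass over (image, self_reported_race) batches from a hypothetical loader."""
    model.to(device).train()
    for images, race_labels in loader:  # images: (B, 3, H, W); labels: (B,)
        logits = model(images.to(device))
        loss = loss_fn(logits, race_labels.to(device))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

The point of the exercise is not that such a model should exist, but that off-the-shelf components like these suffice to recover racial identity from images in which human experts can see no such signal.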

Human experts cannot determine a patient’s race from these medical imaging examinations, and so, until our study, the question had never been seriously investigated; it was not thought possible. To our surprise, we found that AI models can very reliably predict self-reported race from medical images across multiple imaging modalities, datasets, and clinical tasks. Even when we altered characteristics like age, tissue density, and body habitus (physique), the models’ accuracy held. In and of itself this may be concerning, as the attribute could be exploited to reproduce or exacerbate racial inequalities in medicine. But the greater risk is that AI systems will trivially learn to predict a patient’s race without a medical professional even realizing it, and will reinforce disparate outcomes. Since medical professionals often do not have access to patient race data when performing routine tasks (such as a clinical radiologist reviewing a medical image), they would not be able to tell if an algorithm was routinely making bad or harmful decisions based on patient race.
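
The robustness checks described above can be approximated in code: degrade each image and test whether the model’s race prediction survives. The specific perturbations below (blur, noise, downsampling) are assumptions for illustration, not the study’s exact protocol.

```python
# Sketch of a perturbation study: degrade an image several ways and check
# whether the model's predicted race label stays the same. The perturbation
# set is illustrative, not the study's exact protocol.
import torch
import torchvision.transforms.functional as TF

def perturbations(image: torch.Tensor) -> dict:
    """Return degraded copies of a (3, H, W) image tensor scaled to [0, 1]."""
    h, w = image.shape[-2:]
    return {
        "blurred": TF.gaussian_blur(image, kernel_size=[21, 21]),
        "noisy": (image + 0.1 * torch.randn_like(image)).clamp(0.0, 1.0),
        "low_res": TF.resize(
            TF.resize(image, [h // 8, w // 8], antialias=True), [h, w],
            antialias=True,
        ),
    }

@torch.no_grad()
def prediction_survives(model, image, device="cpu") -> dict:
    """True for each degradation under which the predicted label is unchanged."""
    model.to(device).eval()
    base = model(image.unsqueeze(0).to(device)).argmax(dim=1)
    return {
        name: bool((model(x.unsqueeze(0).to(device)).argmax(dim=1) == base).item())
        for name, x in perturbations(image).items()
    }
```

If predictions routinely survive aggressive degradation, the racial signal is unlikely to live in any single obvious image feature, which is consistent with what the study reports.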

Far from being an issue for medical professionals alone, these findings matter for the users, developers, and regulators overseeing AI technologies.

Authors
  • Matthew Lungren

Related Publications

Toward Responsible AI in Health Insurance Decision-Making
Michelle Mello, Artem Trotsyuk, Abdoul Jalil Djiberou Mahamadou, Danton Char
Policy Brief, Feb 10, 2026
Topics: Healthcare; Regulation, Policy, Governance

This brief proposes governance mechanisms for the growing use of AI in health insurance utilization review.

Response to FDA's Request for Comment on AI-Enabled Medical Devices
Desmond C. Ong, Jared Moore, Nicole Martinez-Martin, Caroline Meinhardt, Eric Lin, William Agnew
Response to Request, Dec 02, 2025
Topics: Healthcare; Regulation, Policy, Governance

Stanford scholars respond to a federal RFC on evaluating AI-enabled medical devices, recommending policy interventions to help mitigate the harms of AI-powered chatbots used as therapists.

Moving Beyond the Term "Global South" in AI Ethics and Policy
Evani Radiya-Dixit, Angèle Christin
Issue Brief, Nov 19, 2025
Topics: Ethics, Equity, Inclusion; International Affairs, International Security, International Development

This brief examines the limitations of the term "Global South" in AI ethics and policy, and highlights the importance of grounding such work in specific regions and power structures.

Russ Altman’s Testimony Before the U.S. Senate Committee on Health, Education, Labor, and Pensions
Russ Altman
Testimony, Oct 09, 2025
Topics: Healthcare; Regulation, Policy, Governance; Sciences (Social, Health, Biological, Physical)

In this testimony presented to the U.S. Senate Committee on Health, Education, Labor, and Pensions hearing titled “AI’s Potential to Support Patients, Workers, Children, and Families,” Russ Altman highlights opportunities for congressional support to make AI applications for patient care and drug discovery stronger, safer, and human-centered.