
HAI Policy Briefs

December 2021

Risks of AI Race Detection in the Medical System

AI is being deployed for a range of tasks across the medical system, from scanning patients' faces to detecting early-stage cancer. At the same time, however, AI systems that draw conclusions about demographic information could seriously exacerbate disparities in the medical system, and this is especially true of race. Left unexamined and unchecked, algorithms that assess patients' racial identity, whether accurately or not, could worsen long-standing inequities in the quality and cost of, and access to, care.

Key Takeaways


➜ Algorithms that infer a patient's race, without medical professionals even knowing it, may exacerbate already serious disparities in health outcomes and patient care between racial groups.

➜ Technical “de-biasing” techniques often proposed for other algorithms, such as distorting inputs (e.g., altering images), may be largely ineffective with medical imaging AI.

➜ This research was made possible only by the efforts of several universities and hospitals to make open medical data a public good, allowing our researchers to explore important research questions without conflicts with commercial interests.

➜ Future work on regulating and approving medical imaging AI should include audits explicitly focused on evaluating an algorithm's performance on data that includes racial identity, sex, and age.

Read the full brief


Matthew Lungren, Stanford University
