Policy Brief

The Complexities of Race Adjustment in Health Algorithms

Date: September 26, 2024
Topics: Healthcare; Ethics, Equity, Inclusion
Abstract

This policy brief explores the complexities of accounting for race in clinical algorithms for evaluating kidney disease and the implications for tackling deep-seated health inequities.

Key Takeaways

  • Chronic kidney disease affects more than 1 in 7 adults in the United States, with much higher rates of kidney failure among racial and ethnic minorities.

  • Drawing on Stanford Health Care data on more than half a million patients from 2019 to 2023, we conducted the first assessment of how a new clinical algorithm for evaluating chronic kidney disease, one that no longer adjusts for race, affects clinical decision-making.

  • We find that the new algorithm lowered kidney health estimates for Black or African American patients, meaning they were classified into more severe stages of the disease. Despite these impacts, we observed no meaningful change in nephrology referrals and visits after the new algorithm was introduced.

  • Technical “fixes” alone are insufficient to address deep-seated health inequities. Policymakers should incentivize rigorous evaluations of new or modified clinical algorithms prior to deployment, where possible, and invest in tackling non-algorithmic, structural causes of health inequities in chronic kidney disease and other conditions.

Executive Summary

Chronic kidney disease affects more than 1 in 7 adults—or about 37 million people—in the United States. For racial and ethnic minorities, the burden of kidney failure is higher: Black or African American and Hispanic patients are at least 3-fold and 1.5-fold more likely, respectively, to progress to kidney failure compared with non-Hispanic white patients, in part due to delays in referrals and visits to nephrology. Despite recognition of these disparities in the 1980s, there has been little to no improvement since then.

There are debates about how to account for race in algorithms that are widely used to gauge the severity of kidney disease and inform related care decisions. For a long time, race was considered a factor when assessing kidney disease severity, and two of the most widely adopted kidney-disease-related equations incorporated a Black or non-Black race variable. Because the use of race variables in clinical algorithms propagates racial bias in decision-making, in 2021 two professional organizations helped develop a different clinical algorithm that does not incorporate race.

Our paper, “Algorithmic Changes Are Not Enough: Evaluating the Removal of Race Adjustment from the eGFR Equation,” is the first to assess the 2021 equation’s effect on care decision-making for chronic kidney disease patients, including its impact on care disparities for racial and ethnic minorities. Our study estimates the effects of implementing the kidney disease equation without race adjustment on nephrology referrals and visits for patients within the Stanford Health Care system.

While our study focuses on a single medical center and a single disease, the findings present important considerations for the healthcare field. As policymakers, healthcare practitioners, and technologists alike pursue the application of AI and machine learning (ML) algorithms in healthcare, our findings underscore the need for health equity research and highlight the limitations of employing technical “fixes” to address deep-seated health inequities.

Introduction

Clinical algorithms are used in many healthcare contexts, and the treatment of chronic kidney disease is no exception. Primary care providers typically rely on an equation that estimates how well a kidney filters waste and toxins from the blood—also known as the estimated glomerular filtration rate (eGFR)—to gauge the severity of the disease. Patients with lower eGFR values are classified into more severe chronic kidney disease stages.
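
For concreteness, the short sketch below maps an eGFR value to a chronic kidney disease stage using the standard KDIGO G-stage thresholds; these thresholds come from general KDIGO guidance rather than from this brief, and the function name is purely illustrative.

```python
# Minimal sketch: map an eGFR value (mL/min/1.73 m^2) to a KDIGO CKD G-stage.
# Thresholds are the standard KDIGO categories, not taken from this brief.
def ckd_g_stage(egfr: float) -> str:
    if egfr >= 90:
        return "G1"   # normal or high
    if egfr >= 60:
        return "G2"   # mildly decreased
    if egfr >= 45:
        return "G3a"  # mildly to moderately decreased
    if egfr >= 30:
        return "G3b"  # moderately to severely decreased
    if egfr >= 15:
        return "G4"   # severely decreased
    return "G5"       # kidney failure
```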

The two most widely adopted equations, the MDRD Study equation and the CKD-EPI 2009 equation, both incorporate data on serum creatinine (a key indicator of how well a kidney filters blood), age, sex, and Black versus non-Black race. The race variable leads to an increase in eGFR values for patients documented as Black or African American.

In 2021, amid growing concerns about racial bias in algorithms, health professionals developed CKD-EPI 2021, a new equation that no longer incorporated a patient’s race among its variables. In validation studies, this new equation underpredicted true kidney filtration rates for Black patients and overpredicted those for non-Black patients. By lowering eGFR values for Black patients, CKD-EPI 2021 was thought to promote early detection and treatment of chronic kidney disease and ultimately reduce downstream disparities in kidney disease diagnosis and treatment.
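
As a concrete illustration of the change described above, the following sketch implements both creatinine-based equations. The coefficients are quoted from the published CKD-EPI 2009 and CKD-EPI 2021 papers rather than from this brief, and the function names and example values are illustrative only; verify against the original publications before any real use.

```python
# Sketch of the creatinine-based CKD-EPI equations discussed above.
# Coefficients are taken from the published 2009 and 2021 papers, not this brief.

def egfr_ckd_epi_2009(scr_mg_dl: float, age: float, female: bool, black: bool) -> float:
    """eGFR (mL/min/1.73 m^2) with the Black/non-Black race adjustment."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1) ** alpha
            * max(scr_mg_dl / kappa, 1) ** (-1.209)
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # race variable raises eGFR for Black patients
    return egfr

def egfr_ckd_epi_2021(scr_mg_dl: float, age: float, female: bool) -> float:
    """eGFR (mL/min/1.73 m^2) without any race term."""
    kappa = 0.7 if female else 0.9
    alpha = -0.241 if female else -0.302
    egfr = (142
            * min(scr_mg_dl / kappa, 1) ** alpha
            * max(scr_mg_dl / kappa, 1) ** (-1.200)
            * 0.9938 ** age)
    if female:
        egfr *= 1.012
    return egfr

# Example: for the same patient documented as Black or African American,
# the 2021 equation yields a lower eGFR, i.e., a more severe stage.
print(egfr_ckd_epi_2009(1.4, 60, female=False, black=True))  # roughly 63
print(egfr_ckd_epi_2021(1.4, 60, female=False))              # roughly 58
```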

CKD-EPI 2021 has been implemented and deployed in many healthcare systems without having been thoroughly evaluated for its impact on care decision-making and health outcomes. There is a strong need to ameliorate harm expeditiously where possible, but how to do so most effectively when the impact of a new equation is unknown remains a broader question for healthcare professionals, policymakers, and patients.

Our study assesses the effects of CKD-EPI 2021 on patient referrals and visits for nephrology care at Stanford Health Care, which began using the new equation without race adjustment for chemistry panels and point-of-care services on December 1, 2021. We analyzed electronic health record data from Stanford Health Care hospitals and clinics on 574,194 adult patients aged 21 and older who had at least one recorded serum creatinine value between January 1, 2019, and September 1, 2023. Among the patients we studied, 5 percent were documented as Black or African American, the overall mean age was 48 years, and 55 percent were female.

Our analysis compared eGFR values and chronic kidney disease stages calculated by CKD-EPI 2009, the race-adjusted equation used before December 2021, with those calculated by CKD-EPI 2021, the equation without race adjustment implemented beginning in December 2021. To assess health outcomes, we compared quarterly rates of nephrology referrals, which are often prerequisites for nephrology visits, as well as the visits themselves. We defined quarters to align with the implementation of the new equation, starting in December 2021.
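
The following sketch, which is not the authors' code, illustrates how such a quarterly comparison could be set up, with quarters anchored to the December 1, 2021 switch date. The DataFrame column names and the per-1,000-patients rate are hypothetical placeholders standing in for an electronic health record extract.

```python
# Illustrative sketch: quarterly nephrology referral counts before vs. after the
# December 1, 2021 switch to CKD-EPI 2021. Column names are hypothetical.
import pandas as pd

SWITCH_DATE = pd.Timestamp("2021-12-01")

def quarterly_referral_rates(referrals: pd.DataFrame, cohort_size: int) -> pd.DataFrame:
    """Count nephrology referrals per quarter, with quarters anchored so that one
    quarter begins exactly on the equation switch date."""
    df = referrals.copy()
    df["referral_date"] = pd.to_datetime(df["referral_date"])
    # Whole months elapsed between the referral month and the switch month.
    months_from_switch = ((df["referral_date"].dt.year - SWITCH_DATE.year) * 12
                          + (df["referral_date"].dt.month - SWITCH_DATE.month))
    df["quarter"] = months_from_switch // 3  # quarter 0 starts Dec 1, 2021
    out = (df.groupby("quarter")
             .size()
             .rename("n_referrals")
             .reset_index())
    out["rate_per_1000"] = 1000 * out["n_referrals"] / cohort_size
    out["post_switch"] = out["quarter"] >= 0
    return out
```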

Authors
  • Marika Cusick
  • Glenn Chertow
  • Douglas Owens
  • Michelle Williams
  • Sherri Rose

Related Publications

Response to FDA's Request for Comment on AI-Enabled Medical Devices
Desmond C. Ong, Jared Moore, Nicole Martinez-Martin, Caroline Meinhardt, Eric Lin, William Agnew
Response to Request | Dec 02, 2025
Stanford scholars respond to a federal RFC on evaluating AI-enabled medical devices, recommending policy interventions to help mitigate the harms of AI-powered chatbots used as therapists.

Moving Beyond the Term "Global South" in AI Ethics and Policy
Evani Radiya-Dixit, Angèle Christin
Issue Brief | Nov 19, 2025
This brief examines the limitations of the term "Global South" in AI ethics and policy, and highlights the importance of grounding such work in specific regions and power structures.

Russ Altman’s Testimony Before the U.S. Senate Committee on Health, Education, Labor, and Pensions
Russ Altman
Testimony | Oct 09, 2025
In this testimony presented to the U.S. Senate Committee on Health, Education, Labor, and Pensions hearing titled “AI’s Potential to Support Patients, Workers, Children, and Families,” Russ Altman highlights opportunities for congressional support to make AI applications for patient care and drug discovery stronger, safer, and human-centered.

Michelle M. Mello's Testimony Before the U.S. House Committee on Energy and Commerce Health Subcommittee
Michelle Mello
Testimony | Sep 02, 2025
In this testimony presented to the U.S. House Committee on Energy and Commerce’s Subcommittee on Health hearing titled “Examining Opportunities to Advance American Health Care through the Use of Artificial Intelligence Technologies,” Michelle M. Mello calls for policy changes that will promote effective integration of AI tools into healthcare by strengthening trust.