A Better Measuring Stick: Algorithmic Approach to Pain Diagnosis Could Eliminate Racial Bias

Date
February 24, 2021
Topics
Healthcare
Machine Learning

Traditional approaches to pain management don’t treat all patients the same. AI could level the playing field.

Among the many mysteries in medical science is the fact that minority and low-income patients experience greater pain than other segments of the population. This holds regardless of the root cause of the pain, and even when comparing patients with similar levels of disease severity. Now, a team of researchers, including Stanford computer scientist Jure Leskovec, has used AI to measure severe knee pain more accurately and more fairly.

Today, when patients with knee pain visit the doctor, the severity of their osteoarthritis is rated on what is known as the Kellgren and Lawrence Grade (KLG). However, even between two patients with similar osteoarthritis and the same KLG score, low-income patients report more pain. Consequently, underserved patients fail to qualify for knee-replacement surgeries and are more often treated with risky opioid painkillers.

A Definitive Answer

The racial and socioeconomic pain disparity in KLG scores has even led some to wonder whether the pain is not solely caused by injury but is made worse by factors outside the knee, such as stress.

To answer that question definitively, Leskovec and a group of colleagues from Stanford, Harvard, the University of Chicago, and Berkeley turned to artificial intelligence. They developed a machine-learning algorithm to show that the standard radiographic measures of pain used today — namely KLG — may be overlooking certain features of injured knees that cause pain.

What’s more, these biases unfavorably and disproportionately affect how pain is treated in underserved minority and low-income populations. The new algorithmic approach evaluates patient X-rays and quantifies pain levels much more accurately and more fairly.

“By using X-rays exclusively, we show the pain is, in fact, in the knee, not somewhere else,” Leskovec says. “What’s more, X-rays contain these patterns loud and clear but KLG cannot read them. We developed an AI-based solution that can learn to read these previously unknown patterns.”

Were the pain not in the knee itself, adds Leskovec, a Stanford Institute for Human-Centered Artificial Intelligence faculty member, even AI would fail to capture it. It turns out that KLG overlooks these patterns and doesn’t accurately “read” pain from the objective criteria in the knee. The bottom line is that AI can remove the bias in the way knee pain is measured and, by extension, how it is treated. As a result, more minority and low-income patients would qualify for knee-replacement surgeries.

Factoring All Pain Points

Leskovec and his collaborators began with a diverse database of over 4,000 patients and more than 35,000 images of their damaged knees. It included almost 20 percent Black patients and large numbers of lower-income and lower-educated patients.

The machine learning algorithm then evaluated the scans of all the patients, along with other demographic and health data such as race, income, and body mass index, and predicted patient pain levels. The team was then able to parse the data in various ways, separating out just the Black patients, for instance, or looking only at low-income populations, to compare algorithmic performance and test various hypotheses.

The bottom line, Leskovec says, is that the models trained on the diverse data set were the most accurate in predicting pain and reduced the racial and socioeconomic disparity in pain scores.
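The disparity-reduction claim can be illustrated with a toy calculation: fit reported pain against a severity score, then compare the average residual (unexplained) pain between groups. The sketch below is hypothetical; all data, names, and effect sizes are invented for illustration and are not taken from the study. A score that captures the pain-relevant features of the knee leaves a much smaller group gap than a KLG-like grade that misses them.

```python
import numpy as np

def unexplained_disparity(pain, severity, group):
    """Gap in mean residual pain between groups after a linear fit on a severity score.

    pain, severity: float arrays; group: boolean array (True = underserved group).
    A larger value means the severity measure leaves more of the underserved
    group's reported pain unexplained.
    """
    # Fit pain ~ severity by ordinary least squares, then compare residuals.
    slope, intercept = np.polyfit(severity, pain, 1)
    residual = pain - (slope * severity + intercept)
    return residual[group].mean() - residual[~group].mean()

# Toy data: some pain-causing knee features are more common in the
# underserved group, and the KLG-like grade fails to register them.
rng = np.random.default_rng(0)
n = 1000
group = rng.random(n) < 0.2
true_severity = rng.normal(0, 1, n) + 0.8 * group     # includes hidden features
pain = true_severity + rng.normal(0, 0.2, n)
klg_like = true_severity - 0.8 * group + rng.normal(0, 0.2, n)  # misses them
algo_score = true_severity + rng.normal(0, 0.1, n)              # captures them

gap_klg = unexplained_disparity(pain, klg_like, group)    # large gap
gap_algo = unexplained_disparity(pain, algo_score, group) # near zero
```

In this toy setup the KLG-like grade leaves a sizable unexplained pain gap for the underserved group, while the algorithmic score, which tracks the true severity, nearly eliminates it.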

“The pain is in the knee,” Leskovec says. “Useful as it still is, KLG was developed in the 1950s using a population that was not very diverse, and consequently it overlooks important knee pain indicators. This shows how important diverse and representative data are to AI.”

Better Clinical Decision Making

Leskovec notes that AI will certainly not replace the physician’s expertise in pain management decisions; rather, he sees it aiding those decisions. The algorithm not only scores pain more accurately but also produces additional visual output that could prove helpful in the clinic, such as “heat maps” of the areas of the knee most affected by pain. These maps might help physicians notice problems not apparent in the KLG evaluation and, for instance, prescribe fewer opioids and offer knee replacements to more patients in underserved populations.
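Heat maps like the ones Leskovec describes can be produced with standard model-explanation techniques. One simple approach is occlusion sensitivity: blank out each patch of the X-ray in turn and record how much the model’s pain score drops. The sketch below is a hypothetical illustration of that general technique, with an invented stand-in scoring function; it is not the study’s actual method.

```python
import numpy as np

def occlusion_heat_map(image, score_fn, patch=8):
    """Occlusion-sensitivity map: how much the model's score drops when each
    patch of the image is blanked out. High values mark regions the model
    relies on."""
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0   # blank one patch
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat

# Stand-in "model": scores the mean intensity of one fixed region of
# interest, mimicking a model that attends to one part of the joint.
def toy_score(img):
    return img[16:24, 16:24].mean()

img = np.ones((32, 32))
heat = occlusion_heat_map(img, toy_score, patch=8)
# Only the patch covering the region of interest shows a score drop.
```

With a real model, `score_fn` would wrap the trained network’s forward pass, and the resulting grid could be upsampled and overlaid on the X-ray for the clinician.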

As Leskovec’s work shows, artificial intelligence can help redress such inequalities. It reads knee pain more accurately and could greatly expand and improve treatment options for traditionally underserved patients.

“We think AI could become a powerful tool in the treatment of pain across all parts of society,” Leskovec says.

Stanford HAI's mission is to advance AI research, education, policy and practice to improve the human condition.

Contributor(s)
Andrew Myers