A Better Measuring Stick: Algorithmic Approach to Pain Diagnosis Could Eliminate Racial Bias | Stanford HAI

Date: February 24, 2021
Topics: Healthcare, Machine Learning
Tefi | Shutterstock

Traditional approaches to pain management don’t treat all patients the same. AI could level the playing field.

One enduring puzzle in medical science is that minority and low-income patients report greater pain than the rest of the population. This holds regardless of the root cause of the pain, even when comparing patients with similar levels of disease severity. Now, a team of researchers, including Stanford computer scientist Jure Leskovec, has used AI to measure severe knee pain more accurately and more fairly.

Today, when patients with knee pain visit the doctor, the severity of their osteoarthritis is rated using the Kellgren and Lawrence Grade (KLG). However, even when two patients have similar osteoarthritis and the same KLG score, those from low-income populations report more pain. Consequently, underserved patients fail to qualify for knee-replacement surgery and are more often treated with risky opioid painkillers.

A Definitive Answer

The racial/socioeconomic pain disparity in KLG scores has even led some to wonder: Perhaps the pain is not solely caused by injury, but is being made worse by other factors not in the knee, such as stress.

To answer that question definitively, Leskovec and a group of colleagues from Stanford, Harvard, the University of Chicago, and Berkeley turned to artificial intelligence. They developed a machine-learning algorithm to show that the standard radiographic measures of pain used today — namely KLG — may be overlooking certain features of injured knees that cause pain.

What’s more, these biases unfavorably and disproportionately affect how pain is treated in underserved minority and low-income populations. The new algorithmic approach evaluates patient X-rays and quantifies pain levels much more accurately and more fairly.

“By using X-rays exclusively, we show the pain is, in fact, in the knee, not somewhere else,” Leskovec says. “What’s more, X-rays contain these patterns loud and clear but KLG cannot read them. We developed an AI-based solution that can learn to read these previously unknown patterns.”

Were the pain not in the knee itself, adds Leskovec, a Stanford Institute for Human-Centered Artificial Intelligence faculty member, even AI would fail to capture it. It turns out that KLG overlooks these patterns and doesn’t accurately “read” pain from the objective criteria in the knee. The bottom line is that AI can remove the bias in the way knee pain is measured and, by extension, how it is treated. Consequently, more minority and low-income patients would qualify for knee-replacement surgery.

Factoring All Pain Points

Leskovec and his collaborators began with a diverse database of over 4,000 patients and more than 35,000 images of their damaged knees. It included almost 20 percent Black patients and large numbers of lower-income and lower-educated patients.

The machine-learning algorithm then evaluated the scans of all the patients along with other demographic and health data, such as race, income, and body mass index, and predicted patient pain levels. The team was then able to parse the data in various ways, separating just the Black patients, for instance, or looking only at low-income populations, to compare algorithmic performance and test various hypotheses.
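The subgroup comparison described above can be sketched in a few lines. This is a hypothetical illustration, not the study's code: the field names, toy data, and the idea of measuring each method's "under-prediction gap" per group are assumptions made for the example.

```python
# Hypothetical sketch of a subgroup comparison: for each scoring method,
# measure how much it under-predicts reported pain within each group.
# All field names and data here are illustrative, not from the study.

from statistics import mean

def underprediction_by_group(records, score_key):
    """Mean (reported pain - predicted score) per demographic group.

    A positive value means the method systematically under-predicts
    pain for that group."""
    groups = {}
    for r in records:
        groups.setdefault(r["group"], []).append(
            r["reported_pain"] - r[score_key]
        )
    return {g: mean(residuals) for g, residuals in groups.items()}

# Toy data: the KLG-style score misses pain in group B; the model does not.
records = [
    {"group": "A", "reported_pain": 4.0, "klg_score": 4.0, "model_score": 4.0},
    {"group": "A", "reported_pain": 5.0, "klg_score": 5.0, "model_score": 5.0},
    {"group": "B", "reported_pain": 6.0, "klg_score": 4.0, "model_score": 5.5},
    {"group": "B", "reported_pain": 7.0, "klg_score": 5.0, "model_score": 6.5},
]

print(underprediction_by_group(records, "klg_score"))    # group B gap: 2.0
print(underprediction_by_group(records, "model_score"))  # group B gap: 0.5
```

A scoring method that is fair in this sense would leave a near-zero gap for every group, which is the pattern the researchers report for the model trained on diverse data.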

The bottom line, Leskovec says, is that the models trained using the diverse training data sets were the most accurate in predicting pain and reduced the racial and socioeconomic disparity in pain scores.

“The pain is in the knee,” Leskovec says. “Still useful as it is, KLG was developed in the 1950s using a not very diverse population and, consequently, it overlooks important knee pain indicators. This shows the importance of using diverse and representative data in AI.”

Better Clinical Decision Making

Leskovec notes that AI will certainly not replace the physician’s expertise in pain-management decisions; rather, he sees it aiding them. The algorithm not only scores pain more accurately but also produces additional visual data that could prove helpful in the clinic, such as “heat maps” of the areas of the knee most affected by pain. These maps might help physicians notice problems not apparent in the KLG evaluation and, for instance, prescribe fewer opioids and extend knee replacements to more patients in underserved populations.
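The article does not specify how the heat maps are produced. One generic technique for deriving such a map from any image-based predictor is occlusion sensitivity: mask each region of the image and record how much the predicted score drops. The sketch below is a minimal, assumed illustration of that idea; the `predict` function is a toy stand-in, not the study's model.

```python
# Hypothetical occlusion-sensitivity sketch: regions whose removal lowers
# the predicted pain score the most contribute most to the prediction.

def occlusion_heatmap(image, predict, patch=2):
    """Score drop when each patch x patch region is zeroed out."""
    h, w = len(image), len(image[0])
    base = predict(image)
    heat = [[0.0] * w for _ in range(h)]
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            # Copy the image and zero out one patch.
            masked = [row[:] for row in image]
            for di in range(i, min(i + patch, h)):
                for dj in range(j, min(j + patch, w)):
                    masked[di][dj] = 0.0
            drop = base - predict(masked)
            for di in range(i, min(i + patch, h)):
                for dj in range(j, min(j + patch, w)):
                    heat[di][dj] = drop
    return heat

# Toy predictor: "pain score" is just the mean pixel intensity.
def predict(img):
    return sum(map(sum, img)) / (len(img) * len(img[0]))

image = [[1.0, 1.0, 0.0, 0.0],
         [1.0, 1.0, 0.0, 0.0],
         [0.0, 0.0, 0.0, 0.0],
         [0.0, 0.0, 0.0, 0.0]]
heat = occlusion_heatmap(image, predict)
# The bright top-left patch drives the score, so its heat value is highest.
```

A clinician-facing version would overlay such a map on the X-ray, highlighting regions the model associates with pain that a KLG reading might miss.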

As Leskovec’s work shows, artificial intelligence can help correct such inequalities: it reads knee pain more accurately and could greatly expand and improve treatment options for traditionally underserved patients.

“We think AI could become a powerful tool in the treatment of pain across all parts of society,” Leskovec says.

Stanford HAI's mission is to advance AI research, education, policy and practice to improve the human condition. Learn more. 

Contributor(s)
Andrew Myers