New Large Language Model Helps Patients Understand Their Radiology Reports

Date: June 23, 2025
Topics: Healthcare, Natural Language Processing

‘RadGPT’ cuts through medical jargon to answer common patient questions.

Imagine getting an MRI of your knee and being told you have “mild intrasubstance degeneration of the posterior horn of the medial meniscus.”

Chances are that most of us who didn’t go to medical school can’t decipher that jargon, much less know what action to take from the diagnosis. That’s why Stanford radiologists developed a large language model to help address patients’ medical concerns and questions about X-rays, CTs, MRIs, ultrasounds, PET scans, and angiograms.

Using this model, a patient getting a knee MRI could receive a simpler, more useful explanation: the meniscus is a tissue in your knee that serves as a cushion, and, like a pillow, it has gone a little flat but can still function.

This LLM – dubbed “RadGPT” – can extract concepts from a radiologist’s report, explain each one in plain language, and suggest possible follow-up questions. The research was published this month in the Journal of the American College of Radiology.
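
RadGPT’s actual prompts and underlying model are not described in this article. As a minimal sketch of the extract-then-explain workflow it outlines, assuming only a generic text-in, text-out LLM call (the `ask_llm` helper, function names, and prompt wording below are all hypothetical, not the authors’), the flow might look like this:

```python
# A minimal sketch of the extract-then-explain pipeline described above.
# `ask_llm` stands in for any text-in/text-out LLM call (e.g., a chat
# completion API); prompts and names are illustrative, not RadGPT's own.
from typing import Callable

def extract_concepts(report_text: str, ask_llm: Callable[[str], str]) -> list[str]:
    """Pull out the findings a patient is likely to ask about, one per line."""
    prompt = ("List the key medical findings in this radiology report, "
              "one per line:\n\n" + report_text)
    return [line.strip() for line in ask_llm(prompt).splitlines() if line.strip()]

def explain_concept(concept: str, ask_llm: Callable[[str], str]) -> dict:
    """Plain-language explanation plus suggested follow-up questions."""
    explanation = ask_llm(
        f"Explain '{concept}' to a patient with no medical training, in two sentences."
    )
    questions = ask_llm(
        f"Suggest three questions a patient might ask their doctor about "
        f"'{concept}', one per line."
    )
    return {
        "concept": concept,
        "explanation": explanation.strip(),
        "follow_up_questions": [q.strip() for q in questions.splitlines() if q.strip()],
    }

def summarize_report(report_text: str, ask_llm: Callable[[str], str]) -> list[dict]:
    """Extract concepts first, then explain each one, mirroring the
    report-first workflow the article describes."""
    return [explain_concept(c, ask_llm) for c in extract_concepts(report_text, ask_llm)]
```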

Traditionally, medical expertise is needed to understand the technical reports radiologists write about patient scans, said Curtis Langlotz, Stanford professor of radiology, of medicine, and of biomedical data science, senior fellow at the Stanford Institute for Human-Centered AI (HAI), and senior author of the study. “We hope that our technology won’t just help to explain the results, but will also help to improve the communication between doctor and patient.”

Since 2021, under the 21st Century Cures Act, patients in the United States have had federal protection to get electronic access to their own radiology reports. But tools like RadGPT could get patients more engaged in their care, Langlotz believes, because they can better understand what their test results actually mean.

“Doctors don’t always have the time to go through and explain reports, line by line,” Langlotz said. “I think patients who really do understand what’s in their medical record are going to get better care and will ask better questions.”

To develop RadGPT, the Stanford team took 30 sample radiology reports and extracted five concepts from each. For each of those 150 concepts, they developed an explanation and three question-and-answer pairs of the kind patients commonly ask. Five radiologists reviewed these materials and determined that the system is unlikely to produce hallucinations or other harmful explanations.
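
The paper’s actual data format isn’t given in this article; purely as a back-of-the-envelope check of the numbers above, one plausible record layout (all type and field names below are assumptions, not the authors’) would be:

```python
from dataclasses import dataclass, field

# Hypothetical record layout for the evaluation set described above:
# 30 reports x 5 concepts = 150 concepts, each with one explanation
# and three question-and-answer pairs. Names are illustrative only.

@dataclass
class QAPair:
    question: str
    answer: str

@dataclass
class ConceptEntry:
    concept: str       # e.g., "intrasubstance degeneration"
    explanation: str   # plain-language explanation of the concept
    qa_pairs: list[QAPair] = field(default_factory=list)  # three per concept

REPORTS, CONCEPTS_PER_REPORT, QA_PER_CONCEPT = 30, 5, 3
print(REPORTS * CONCEPTS_PER_REPORT)                   # 150 concepts
print(REPORTS * CONCEPTS_PER_REPORT * QA_PER_CONCEPT)  # 450 Q&A pairs
```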

AI is still a long way from being able to accurately interpret raw scans. Instead, the current RadGPT model depends on a human radiologist dictating a report; only then does the system extract concepts from what the radiologist has written.

“As with any other healthcare technology, safety is absolutely paramount,” said Sanna Herwald, the study’s lead author and a Stanford resident in graduate medical education. “The reason this study is so exciting is because the RadGPT-generated materials were generally deemed safe without further modification. This means that RadGPT is a promising tool that may, after further testing and validation, directly educate patients about their urgent or incidental imaging findings in real time at the patient’s convenience.”

While this LLM still has to be tested in a clinical setting, Langlotz believes the LLMs underpinning this technology will benefit not only patients, by answering common medical questions, but also radiologists, who could become more productive or take breaks to reduce burnout.

“If you look at self-reports of cognitive load – the amount of work your brain is doing throughout a day – radiology is right at the top of that list,” Langlotz said.

Contributor(s)
Vignesh Ramachandran

Related News

AI Reveals How Brain Activity Unfolds Over Time
Andrew Myers
Jan 21, 2026
News

Stanford researchers have developed a deep learning model that transforms overwhelming brain data into clear trajectories, opening new possibilities for understanding thought, emotion, and neurological disease.

AI Leaders Discuss How To Foster Responsible Innovation At TIME100 Roundtable In Davos
TIME
Jan 21, 2026
Media Mention

HAI Senior Fellow Yejin Choi discussed responsible AI model training at Davos, asking, “What if there could be an alternative form of intelligence that really learns … morals, human values from the get-go, as opposed to just training LLMs on the entirety of the internet, which actually includes the worst part of humanity, and then we then try to patch things up by doing ‘alignment’?” 

Why 'Zero-Shot' Clinical Predictions Are Risky
Suhana Bedi, Jason Alan Fries, and Nigam H. Shah
Jan 07, 2026
News

These models generate plausible timelines from historical patterns; without calibration and auditing, their “probabilities” may not reflect reality.
