Dialing in Patient Attitudes: The Ethics of AI in Medical Decision-making

Date: March 24, 2022
Topics: Healthcare

A multidisciplinary team examines the burgeoning field of AI medical diagnostics and says that AI’s analytical powers must incorporate patients’ values and doctors’ insights.

Is that spot on the X-ray or CT scan something to worry about? It is a question that, thankfully, most people never have to face. But for those who do, the answer can be life-altering.

Increasingly, doctors are calling on artificial intelligence to help diagnose conditions ranging from cancer and heart attack to sepsis and traumatic brain injuries. While AI can sometimes spot concerns that a human might miss, it's not perfect. AI diagnostic aids sometimes recommend unnecessary invasive procedures or miss something that should have been flagged.

These two kinds of mistakes have very different consequences. Deciding whether a scan requires further investigation or not means choosing between risks: the risk of doing an unnecessary procedure and the risk of missing a serious condition. Different patients weight these risks differently. Currently, however, the patient’s own values and preferences about how to weight these risks are not always part of the AI decision-making calculation.

This ethical conundrum raises profound questions for doctors and AI programmers alike, says ethicist and Stanford Institute for Human-Centered AI fellow Kathleen Creel. With a timely commentary in the journal Nature Medicine, Creel and a multidisciplinary team of co-authors with expertise in radiology, philosophy, and AI say the resolution to that dilemma is clear—AI should put the values of the patient first.

Read the full commentary: "Clinical Decisions Using AI Must Consider Patient Values"

“In a clinical setting, there is no one-size-fits-all approach to diagnostics. AI, as a field, should accommodate this reality by being flexible to the patient’s personal perspectives on risk,” Creel says.

In a borderline case, should a doctor tell the patient there is a concern and risk an unnecessary, potentially invasive procedure that could turn out to be nothing? (A false positive result.) Or should that same doctor, knowing the patient’s preference is to avoid an unnecessary procedure at all costs, tell the patient there is nothing to worry about when, in fact, there could be plenty to worry about? (A false negative result.)

Risk-averse patients prefer to do anything they can to avoid a false negative. Others really don’t want to undergo an avoidable surgery. “That's their priority—keep me out of the hospital,” Creel says. “AI designers must build in sensitivity and flexibility to address both types of patients equally and fairly.”

Playing the Percentages

AI in medical devices typically calculates a probability—the likelihood that a spot in a scan is cancer or some other disease—and then makes a recommendation to the doctor whether to investigate further.

There are three general approaches to using AI in these circumstances. The first is the status quo: The algorithm calculates the probability of concern and if it exceeds a threshold—say better than 80 percent likelihood of cancer—the patient is automatically recommended for follow-up. This approach relies exclusively on AI without human input. The doctor knows only AI’s recommendation, not the probability it used to reach it or the threshold, which is often set by programmers.

In a second approach, AI directly incorporates a patient’s values and attitude toward risk and uses this information to set a personalized threshold as to whether to proceed or not. This approach incorporates patient values, but not the doctor’s clinical judgment.

In the third approach, AI not only provides the recommendation but also tells the doctor the probability of disease. It is then up to the doctor’s expertise and knowledge of the patient’s wishes whether to recommend follow-up.

Creel and her co-authors express concerns about the first two approaches, recommending the third. The first unacceptably ignores patient values and wishes. Both the first and second approaches leave the doctor out of the decision, which patient focus groups at both Stanford and Washington University rejected. Instead, patients preferred variants of the third approach, in which doctors incorporate patient values and AI’s percentage to set a personalized threshold for each patient.

Tuning in to Patient Concerns

Creel and colleagues argue that thresholds for converting a probability into a medical decision should be based on specific patient values. Patients should take a brief, pre-examination survey that probes their reactions to hypothetical outcomes to learn about their attitudes toward over- and under-diagnosis, their worries about false-positive and false-negative results, concerns about over- and under-treatment, and quality-of-life issues should they require treatment.

For instance, the questionnaire might ask a patient to respond to statements like: “I would rather risk surgical complications to treat a benign tumor than risk missing a cancerous tumor.”

It is not unlike tuning a radio: the threshold could be dialed up or down to match a patient's relative risk-tolerance score. For a treatment-averse patient, an algorithm might be tuned to a threshold of, say, 90 percent or higher. For a risk-averse patient whose greatest fear is a false negative, the threshold might be set lower so that concerns are flagged more often.
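This dial can be sketched in a few lines of code. The sketch below is purely illustrative and assumes the details the article leaves open: the function names, the 0.60–0.90 threshold range, and the idea of summarizing a patient's survey answers as a single score in [0, 1] are all hypothetical choices, not part of the Nature Medicine commentary.

```python
# Illustrative sketch of a patient-tuned decision threshold.
# Assumptions (not from the commentary): risk_tolerance is a single
# score in [0, 1] derived from the pre-exam survey, where 0.0 means
# risk-averse (fears false negatives) and 1.0 means treatment-averse
# (fears false positives); the 0.60-0.90 range is arbitrary.

def personalized_threshold(risk_tolerance: float,
                           low: float = 0.60,
                           high: float = 0.90) -> float:
    """Map a risk-tolerance score in [0, 1] to a probability threshold.

    Risk-averse patients get a low threshold (flag concerns often);
    treatment-averse patients get a high one (flag only strong signals).
    """
    if not 0.0 <= risk_tolerance <= 1.0:
        raise ValueError("risk_tolerance must be in [0, 1]")
    return low + risk_tolerance * (high - low)

def recommend_follow_up(ai_probability: float, risk_tolerance: float) -> bool:
    """Recommend follow-up when the AI's probability of disease
    meets or exceeds the patient's personalized threshold."""
    return ai_probability >= personalized_threshold(risk_tolerance)

# The same 75% probability flags the risk-averse patient
# but not the treatment-averse one.
print(recommend_follow_up(0.75, risk_tolerance=0.0))  # True
print(recommend_follow_up(0.75, risk_tolerance=1.0))  # False
```

Note that this mirrors the article's third approach only when the doctor, not the software, chooses where to set the dial after discussing the survey results with the patient; hard-coding the mapping would recreate the second approach the authors caution against.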

“Patients deserve to have their values reflected in this debate and in the algorithms,” Creel says. “Adding a degree of patient advocacy would be a positive step in the evolution of AI in medical diagnostics.”

Stanford HAI's mission is to advance AI research, education, policy and practice to improve the human condition.

Contributor: Andrew Myers
