
Dialing in Patient Attitudes: The Ethics of AI in Medical Decision-making

Date
March 24, 2022
Topics
Healthcare

A multidisciplinary team examines the burgeoning field of AI medical diagnostics and argues that AI’s analytical power must be combined with patients’ values and doctors’ insights.

Is that spot on the X-ray or CT scan something to worry about? It is a question that, thankfully, most people never have to face. But for those who do, the answer can be life-altering.

Increasingly, doctors are calling on artificial intelligence to help diagnose conditions ranging from cancer and heart attacks to sepsis and traumatic brain injuries. While AI can sometimes spot concerns that a human might miss, it’s not perfect. These AI diagnostic aids sometimes recommend unnecessary invasive procedures or miss something that should have been flagged.

These two kinds of mistakes have very different consequences. Deciding whether a scan requires further investigation or not means choosing between risks: the risk of doing an unnecessary procedure and the risk of missing a serious condition. Different patients weight these risks differently. Currently, however, the patient’s own values and preferences about how to weight these risks are not always part of the AI decision-making calculation.

This ethical conundrum raises profound questions for doctors and AI programmers alike, says ethicist and Stanford Institute for Human-Centered AI fellow Kathleen Creel. With a timely commentary in the journal Nature Medicine, Creel and a multidisciplinary team of co-authors with expertise in radiology, philosophy, and AI say the resolution to that dilemma is clear—AI should put the values of the patient first.

Read the full commentary: "Clinical Decisions Using AI Must Consider Patient Values"

 

“In a clinical setting, there is no one-size-fits-all approach to diagnostics. AI, as a field, should accommodate this reality by being flexible to the patient’s personal perspectives on risk,” Creel says.

In a borderline case, should a doctor tell the patient there is a concern and risk an unnecessary, potentially invasive procedure that could turn out to be nothing? (A false positive result.) Or should that same doctor, knowing the patient’s preference is to avoid an unnecessary procedure at all costs, tell the patient there is nothing to worry about when, in fact, there could be plenty to worry about? (A false negative result.)

Risk-averse patients prefer to do anything they can to avoid a false negative. Others really don’t want to undergo an avoidable surgery. “That's their priority—keep me out of the hospital,” Creel says. “AI designers must build in sensitivity and flexibility to address both types of patients equally and fairly.”

Playing the Percentages

AI in medical devices typically calculates a probability—the likelihood that a spot in a scan is cancer or some other disease—and then makes a recommendation to the doctor about whether to investigate further.

There are three general approaches to using AI in these circumstances. The first is the status quo: The algorithm calculates the probability of concern, and if it exceeds a threshold—say, better than an 80 percent likelihood of cancer—the patient is automatically recommended for follow-up. This approach relies exclusively on the AI without human input. The doctor sees only the AI’s recommendation, not the probability behind it or the threshold, which is often set by programmers.

In a second approach, the AI directly incorporates a patient’s values and attitude toward risk, using this information to set a personalized threshold for recommending follow-up. This approach incorporates patient values, but not the doctor’s clinical judgment.

In the third approach, the AI not only provides the recommendation but also tells the doctor the probability of disease. It is then up to the doctor, drawing on clinical expertise and knowledge of the patient’s wishes, to decide whether to recommend follow-up.

Creel and her co-authors express concerns about the first two approaches, recommending the third. The first unacceptably ignores patient values and wishes. Both the first and second approaches leave the doctor out of the decision, which patient focus groups at both Stanford and Washington University rejected. Instead, patients preferred variants of the third approach, in which doctors incorporate patient values and AI’s percentage to set a personalized threshold for each patient.
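To make the contrast concrete, the decision logic of the three approaches can be sketched in a few lines of Python. This is an illustration only; the function names, the 0.8 cutoff, and the return values are assumptions made for the example, not the design described in the Nature Medicine commentary.

```python
# Illustrative sketch of the three approaches; the 0.8 default threshold,
# function names, and return values are assumptions, not the commentary's design.

def status_quo(probability: float) -> str:
    """Approach 1: fixed, programmer-set threshold; the doctor sees only the recommendation."""
    return "recommend follow-up" if probability > 0.8 else "no follow-up"

def personalized_threshold(probability: float, patient_threshold: float) -> str:
    """Approach 2: the threshold reflects the patient's risk attitude,
    but the doctor is still left out of the decision."""
    return "recommend follow-up" if probability > patient_threshold else "no follow-up"

def doctor_in_the_loop(probability: float, patient_threshold: float) -> dict:
    """Approach 3: surface the probability alongside the AI's suggestion;
    the final call stays with the doctor, who also knows the patient's wishes."""
    return {
        "probability": probability,
        "ai_suggestion": "recommend follow-up" if probability > patient_threshold else "no follow-up",
        "final_decision": "deferred to the doctor and patient",
    }
```

The key difference sits in the third function: the AI reports the probability rather than silently acting on it, leaving room for the doctor and patient to apply their own threshold.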

Tuning in to Patient Concerns

Creel and colleagues argue that the thresholds for converting a probability into a medical decision should be based on each patient’s values. Before an examination, patients would take a brief survey that probes their reactions to hypothetical outcomes: their attitudes toward over- and under-diagnosis, their worries about false-positive and false-negative results, their concerns about over- and under-treatment, and quality-of-life issues should they require treatment.

For instance, the questionnaire might ask a patient to respond to statements like: “I would rather risk surgical complications to treat a benign tumor than risk missing a cancerous tumor.”

It is not unlike tuning a radio: the threshold could be dialed up or down to match a patient’s relative risk-tolerance score. An algorithm might be tuned to a threshold of, say, 90 percent or higher for a treatment-averse patient. For a risk-averse patient whose greatest fear is a false negative, the threshold might be set lower so that concerns are flagged more often.
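As a rough sketch of that dialing idea, a patient’s survey responses could be reduced to a single risk-tolerance score and mapped onto a threshold. The 0.5-to-0.9 range, the linear mapping, and the function name below are illustrative assumptions, not the authors’ published method.

```python
def threshold_from_risk_score(risk_tolerance: float,
                              low: float = 0.5, high: float = 0.9) -> float:
    """Map a 0-1 risk-tolerance score onto a follow-up threshold.

    A score near 0 means the patient most fears a missed diagnosis
    (false negative), so concerns get flagged at a lower probability.
    A score near 1 means the patient most wants to avoid unnecessary
    procedures, so only high-probability findings trigger follow-up.
    The 0.5/0.9 bounds and the linear mapping are illustrative assumptions.
    """
    risk_tolerance = min(max(risk_tolerance, 0.0), 1.0)  # clamp to [0, 1]
    return low + risk_tolerance * (high - low)

# A treatment-averse patient (score 1.0) gets a 0.9 threshold;
# a patient whose greatest fear is a false negative (score 0.0) gets 0.5.
print(threshold_from_risk_score(1.0))  # 0.9
print(threshold_from_risk_score(0.0))  # 0.5
```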

“Patients deserve to have their values reflected in this debate and in the algorithms,” Creel says. “Adding a degree of patient advocacy would be a positive step in the evolution of AI in medical diagnostics.”

Stanford HAI's mission is to advance AI research, education, policy and practice to improve the human condition. Learn more. 

Contributor(s)
Andrew Myers

Related News

AI Reveals How Brain Activity Unfolds Over Time
Andrew Myers
Jan 21, 2026
News

Stanford researchers have developed a deep learning model that transforms overwhelming brain data into clear trajectories, opening new possibilities for understanding thought, emotion, and neurological disease.

Why 'Zero-Shot' Clinical Predictions Are Risky
Suhana Bedi, Jason Alan Fries, and Nigam H. Shah
Jan 07, 2026
News

These models generate plausible timelines from historical patterns; without calibration and auditing, their “probabilities” may not reflect reality.

Stanford Researchers: AI Reality Check Imminent
Forbes
Dec 23, 2025
Media Mention

Shana Lynch, HAI Head of Content and Associate Director of Communications, pointed out that the “era of AI evangelism is giving way to an era of AI evaluation” in her AI predictions piece, where she interviewed several Stanford AI experts on their insights for AI impacts in 2026.