Who’s at Fault when AI Fails in Health Care?

Date: March 14, 2024
Topics: Healthcare

Hospitals are increasingly adopting AI tools for patient care. They need to be thinking about liability.

Suppose a young man comes into the hospital. His blood is drawn and the lab results are analyzed by a predictive algorithm that suggests he’s healthy; he can go home. Six weeks later he dies of cardiac arrest. The algorithm, it turns out, didn’t consider the man’s family history, which was riddled with early cardiac deaths.

Who’s to blame? The answer is unclear right now, and that means hospitals need to think through the risks that AI poses to them and to their patients.

Michelle Mello, a professor with joint appointments at Stanford Law School and the Stanford School of Medicine, discussed this issue in a recent talk titled “Understanding Liability Risk from Health Care AI Tools.” In the talk, she and her collaborator, fourth-year JD/PhD student Neel Guha, explored how hospitals should approach risk given the rise of AI tools in medicine. (The talk was based on an article in The New England Journal of Medicine co-authored by Mello and Guha, a summary of which is available as a policy brief.)

Read the article, Understanding Liability Risk from Using Health Care Artificial Intelligence Tools

“We desperately need this technology in many areas of health care,” Mello says, noting its potentially revolutionary power in patient diagnosis and treatment. “But people are rightly concerned about the safety risks.”

The Murkiness of the Present

The absence of a clear regulatory structure creates two core challenges for the health care sector. First, there is no well-articulated testing process for these new technologies. While drugs go through FDA approval, for instance, AI tools are simply tested by the companies and developers that create them.

“Everyone is racing to be first in this area,” Mello says. “If we’re moving quickly from innovation to dissemination, then this poses risk.”

Second, in the absence of meaningful formal regulation, it largely falls to the courts to define inappropriate use of these new technologies. The uncertainty surrounding liability for patient harms could be problematic for hospitals. Six out of 10 Americans—the potential jurors who will decide many lawsuits—are uncomfortable with AI in health care; harms are often covered in the media, which poses reputational concerns; and the judges who oversee these cases rarely have a clear understanding of how AI tools work.

Recommendations for Managing Risk

As they consider whether and how to deploy AI tools, hospitals should be balancing the specific risks of a given tool against its potential benefits, while also developing frameworks for managing risk more universally.

When it comes to specific technologies, “hospitals need to start by asking how likely is the output to be wrong and how wrong might it be—that is, the likelihood and size of the error,” Mello says. Of particular concern would be products with high potential to cause harm along either of these dimensions, especially if the harm occurs in cases where outcomes are life and death or the patient population is very fragile.

Also relevant are the ease with which a model can be explained in court and the degree to which humans are involved in decision-making. Counterintuitively, a poorly performing model that is opaque in its operations may ultimately be less likely to generate a lawsuit than a better performing model that is easy to understand, because for the opaque model it’s hard for attorneys to prove the algorithm caused the bad outcome. Likewise, AI tools that rely on people somewhere in the loop may be more likely to result in liability for hospitals and health care practitioners, because the error may be connected to that human/computer interaction.

When considering liability more globally, Mello had four recommendations for hospitals. First, they should focus their most intensive monitoring plans on the highest-risk technologies, stepping down the intensity of oversight as the risk of the technology gets lower. Second, and related, hospitals need to be fastidious about documenting precise details of the tool they’ve deployed, like which model version it is and which software package it’s using.

Third, hospitals should “take advantage of the fact that things are good in the AI market for health care right now,” Mello says. “There are lots of vendors that want to sell, often in exchange for patient data, and this puts hospitals in a great position to bargain over the terms.” What this looks like varies by technology, but one important practice is using licensing contracts to ensure that AI developers shoulder their fair share of liability; another is contracting around any disclaimers issued by the developer that have the effect of shifting liability to users.

Finally, hospitals should give thought to whether use of particular AI tools should be disclosed to patients. Doctors and patients may have very different perceptions about what level of disclosure is appropriate. Patients who feel they weren’t adequately informed can layer claims for breach of informed consent on top of medical malpractice claims.

In talking through these concerns, Mello made clear that the stakes of this conversation extend well beyond health care legal dockets and hospital boardrooms. The implications touch the general marketplace for new health care technologies—and, subsequently, the world we all occupy as patients.

“This matters to developers, as uncertainty about downside risk affects the cost of capital, which affects the kinds of innovations that reach the public and the prices attached to them—and therefore who adopts and benefits from them,” Mello says. “This is far more than a lawyer’s concern.”

Watch the seminar.

Stanford HAI’s mission is to advance AI research, education, policy and practice to improve the human condition. Learn more. 

Contributor(s): Dylan Walsh

Related News

What Your Phone Knows Could Help Scientists Understand Your Health
Katharine Miller | Mar 04, 2026
Stanford scientists have released an open-source platform that lets health researchers study the “screenome” – the digital traces of our daily lives – while protecting participants’ privacy.

How a HAI Seed Grant Helped Launch a Disease-Fighting AI Platform
Dylan Walsh | Mar 03, 2026
Stanford scientists in Senegal hunting for schistosomiasis—a parasitic disease infecting 200+ million people worldwide—used AI to transform local field work into satellite-powered disease mapping.

From Privacy to ‘Glass Box’ AI, Stanford Students Are Targeting Real-World Problems
Nikki Goth Itoi | Feb 27, 2026
An Amazon-backed fellowship will support 10 Stanford PhD students whose work explores everything from how we communicate to understanding disease and protecting our data.