
Policy Brief

February 8, 2024

Understanding Liability Risk from Healthcare AI

Michelle M. Mello, Neel Guha 

This brief explores the legal liability risks of healthcare AI tools by analyzing the challenges courts face in dealing with patient injury caused by defects in AI or software systems.

Key Takeaways

➜ Optimism about AI’s tremendous potential to transform healthcare is tempered by concerns about legal liability: Who will be held responsible when the use of AI tools contributes to patient injury?

➜ Case law on physical injury caused by AI or software systems is sparse. Our analysis of 51 such cases revealed that liability claims generally relate to harm caused by defects in software used to manage care or resources, physicians’ use of software in making care decisions, or the malfunctioning of software embedded in medical devices.

➜ The intangible and opaque nature of software and AI models poses significant challenges for holding software developers liable according to traditional rules governing product liability. Until tort doctrine evolves to address the impact of AI, plaintiffs may struggle to assert, let alone win, their legal claims.

➜ We provide a risk assessment framework to help healthcare organizations calibrate their approach to implementing and monitoring healthcare AI tools based on a careful assessment of each tool’s liability risk. Regulation of healthcare AI should likewise take these differing degrees of risk of harm into account.

➜ Carefully negotiating licensing agreements with AI developers is an important avenue for healthcare organizations to mitigate liability risk.

Executive Summary

Artificial intelligence (AI) holds tremendous potential to transform healthcare. But even amid vast opportunities to improve patient care and reduce costs, grave concerns persist about the wide-ranging risks of adopting AI tools. Attorneys worry about the liability and litigation implications for healthcare organizations, which must also comply with evolving federal laws. Perhaps the most pressing legal question is: Who will be held responsible when AI tools contribute to patient injury?

Perceptions about liability risk will influence physicians’ and healthcare organizations’ willingness to use AI tools. Outsized liability concerns can lead to conservative decision-making regarding AI innovation and adoption. Older forms of clinical decision support, such as software to manage patient care and improve patient safety, have helped healthcare organizations prevent injuries and malpractice claims. In that sense, not adopting new technological tools could itself eventually be viewed as a harmful decision.

In our paper, “Understanding Liability Risk from Using Healthcare Artificial Intelligence Tools,” we examined the challenges courts face in dealing with cases involving software errors. We further analyzed how AI tools can increase or mitigate legal risk before concluding with several risk-management recommendations for healthcare organizations, focusing on AI applications that have a “human in the loop.” Our research will support healthcare organizations, physicians, patients, and policymakers as they weigh the potential benefits against the liability risks of AI use in medicine, while helping them navigate evolving liability issues to ensure the safe adoption of AI tools.

Introduction

There is sparse case law pertaining to AI-related liability in healthcare. Medical AI models are still relatively new, and few personal injury claims have led to judicial opinions. In the software liability cases that have been decided to date, plaintiffs have grappled with a variety of challenges.

Typically, when a product injures a patient, courts look to well-established rules to determine how to allocate liability between the party using the product and the company that made it. The plaintiff must show that the defendant owed them a “duty of care,” that the defendant’s conduct fell below the “standard of care,” and that this violation caused the injury. But making these determinations is much more complicated for AI and other software tools applied in healthcare settings.

Because software is not a tangible object, courts have been reluctant to apply product liability doctrines to AI-related injury claims. The doctrine of “preemption,” meanwhile, prevents patients from bringing personal injury claims in state courts related to certain devices that have already been cleared by the Food and Drug Administration. Additionally, most states require that patients suing a product manufacturer demonstrate that a reasonable, safer alternative design exists and that the injury was foreseeable. Meeting these requirements is technically difficult, given plaintiffs’ limited ability to see into the “black box” of AI systems. Finally, plaintiffs suing physicians must show that the decision to follow or depart from model predictions was “unreasonable.” The tendency of models to perform well on some patient populations but not others, in addition to general problems of opacity, makes it difficult to prove that errors were reasonably foreseeable.

Tort law, which apportions liability for injury or loss, has a history of evolving to adapt to technological changes—and it will here too. To better understand the current state of play and the continuing evolution of liability risk for healthcare AI, we reviewed 803 court cases and studied the salient issues addressed in 51 judicial decisions related to physical injuries from AI and other software (in both health- and non-health-related contexts).

