
HAI Policy Briefs

June 2022

Toward Stronger FDA Approval Standards for AI Medical Devices

As the development and adoption of AI-enabled healthcare continue to accelerate, regulators and researchers are beginning to confront gaps in the clinical evaluation process that, if left unchecked, could harm patient health. Since 2015, the United States Food and Drug Administration (FDA) has evaluated and granted clearance to over 100 AI-based medical devices using a fairly rudimentary evaluation process that has not been adapted to the unique concerns AI raises and is in need of improvement. This brief examines that evaluation process and analyzes how devices were evaluated before approval.

Key Takeaways


➜ We analyzed public records for all 130 FDA-approved medical AI devices cleared between January 2015 and December 2020 and found wide variation, and significant limitations, in test-data rigor and in what developers considered appropriate clinical evaluation.

➜ When we performed a case evaluation of a well-established diagnostic task (detecting pneumothorax, or collapsed lung) using three different sets of training data, the gap in error between white and Black patients increased dramatically: in one instance, the drop in accuracy grew by factors of 1.8 and 4.5 for white and Black patients, respectively. An illustrative sketch of this kind of subgroup evaluation follows these takeaways.

➜ To minimize the risks of patient harm and disparate treatment, policymakers should make multi-site evaluation the standard, encourage greater comparison of the standard of care without AI-enabled tools against care that incorporates them, and mandate post-market surveillance of AI devices.
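The subgroup comparison described in the second takeaway can be illustrated with a minimal sketch. The file names, column names (race, label, prediction), and metric choice below are hypothetical placeholders rather than the brief's actual analysis code; they simply show how one might measure, per racial group, how much a model's accuracy drops when it is tested on data from a site other than the one it was trained on.

```python
# Hypothetical sketch (not the brief's actual code): compare a model's
# accuracy on its own site's held-out data versus an external site's data,
# broken down by patient race. File and column names are placeholders.
import pandas as pd
from sklearn.metrics import accuracy_score


def accuracy_by_group(df: pd.DataFrame, group_col: str = "race") -> dict:
    """Accuracy of stored predictions for each subgroup in the dataframe."""
    return {
        group: accuracy_score(sub["label"], sub["prediction"])
        for group, sub in df.groupby(group_col)
    }


# Predictions made by the same trained model on two test sets:
internal = pd.read_csv("internal_site_predictions.csv")  # training site's held-out data
external = pd.read_csv("external_site_predictions.csv")  # a different hospital's data

internal_acc = accuracy_by_group(internal)
external_acc = accuracy_by_group(external)

# The quantity of interest: how much accuracy drops at the external site, per group.
for group in sorted(set(internal_acc) & set(external_acc)):
    drop = internal_acc[group] - external_acc[group]
    print(f"{group}: internal={internal_acc[group]:.3f} "
          f"external={external_acc[group]:.3f} drop={drop:.3f}")
```

A widening gap between the per-group drops across training datasets is the kind of disparity the takeaway describes; single-site, single-dataset evaluation would not surface it.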

Read the full brief 

Authors

Eric Wu - Stanford University
Kevin Wu - Stanford University
Roxana Daneshjou - Stanford University
David Ouyang - Cedars-Sinai Medical Center
Daniel E. Ho - Stanford University
James Zou - Stanford University
