Danton Char
Associate Professor of Anesthesiology, Perioperative and Pain Medicine (Pediatric)

The AI Arms Race In Health Insurance Utilization Review: Promises Of Efficiency And Risks Of Supercharged Flaws
Health insurers and health care provider organizations are increasingly using artificial intelligence (AI) tools in prior authorization and claims processes. AI offers many potential benefits, but its adoption has raised concerns about the role of the “humans in the loop,” users’ understanding of AI, the opacity of algorithmic determinations, underperformance in certain tasks, automation bias, and unintended social consequences. To date, institutional governance by insurers and providers has not fully met the challenge of ensuring responsible use. Drawing on empirical work on AI use and our own ethical assessments of provider-facing tools as part of the AI governance process at Stanford Health Care, we examine why utilization review has attracted so much AI innovation and why it is challenging to ensure responsible use of AI. We conclude with several steps that could be taken to help realize the benefits of AI use while minimizing risks.
Ethical Obligations to Inform Patients About Use of AI Tools
The permeation of artificial intelligence (AI) tools into health care tests traditional understandings of what patients should be told about their care. Despite the general importance of informed consent, decision support tools (eg, automatic electrocardiogram readers, rule-based risk classifiers, and UpToDate summaries) are not usually discussed with patients, even though they affect treatment decisions. Should AI tools be treated similarly? The legal doctrine of informed consent requires disclosing information that is material to a reasonable patient’s decision to accept a health care service, and evidence suggests that many patients would think differently about their care if they knew it was guided by AI. In recent surveys, 60% of US adults said they would be uncomfortable with their physician relying on AI,1 70% to 80% had low expectations that AI would improve important aspects of their care,2 only one-third trusted health care systems to use AI responsibly,3 and 63% said it was very true that they would want to be notified about use of AI in their care.