Research | Stanford HAI


Research

We enable top minds in AI to study, guide, and develop human-centered AI designed to collaborate with and augment human capabilities.

Recently Published in Research Publications

See all Publications
The AI Arms Race In Health Insurance Utilization Review: Promises Of Efficiency And Risks Of Supercharged Flaws
Michelle Mello, Artem Trotsyuk, Abdoul Jalil Djiberou Mahamadou, Danton Char
Quick Read | Jan 06, 2026

Health insurers and health care provider organizations are increasingly using artificial intelligence (AI) tools in prior authorization and claims processes. AI offers many potential benefits, but its adoption has raised concerns about the role of the “humans in the loop,” users’ understanding of AI, opacity of algorithmic determinations, underperformance in certain tasks, automation bias, and unintended social consequences. To date, institutional governance by insurers and providers has not fully met the challenge of ensuring responsible use. However, several steps could be taken to help realize the benefits of AI use while minimizing risks. Drawing on empirical work on AI use and our own ethical assessments of provider-facing tools as part of the AI governance process at Stanford Health Care, we examine why utilization review has attracted so much AI innovation and why it is challenging to ensure responsible use of AI. We conclude with several steps that could be taken to help realize the benefits of AI use while minimizing risks.

Healthcare
Regulation, Policy, Governance
Research
The Global AI Vibrancy Tool 2025
Loredana Fattorini, Nestor Maslej, Ray Perrault, Vanessa Parli, John Etchemendy, Yoav Shoham, Katrina Ligett
Deep Dive | Nov 24, 2025

This methodological paper presents the Global AI Vibrancy Tool, an interactive suite of visualizations designed to facilitate cross-country comparisons of AI vibrancy, using indicators organized into pillars. The tool offers customizable features that enable users to conduct in-depth country-level comparisons and longitudinal analyses of AI-related metrics.

Democracy
Industry, Innovation
Government, Public Administration
Research
AI, Health, and Health Care Today and Tomorrow: The JAMA Summit Report on Artificial Intelligence
Tina Hernandez-Boussard, Michelle Mello, Nigam Shah, Co-authored by 50+ experts
Deep Dive | Oct 13, 2025

Healthcare
Regulation, Policy, Governance
Research
Automated real-time assessment of intracranial hemorrhage detection AI using an ensembled monitoring model (EMM)
Zhongnan Fang, Andrew Johnston, Lina Cheuy, Hye Sun Na, Magdalini Paschali, Camila Gonzalez, Bonnie Armstrong, Arogya Koirala, Derrick Laurel, Andrew Walker Campion, Michael Iv, Akshay Chaudhari, David B. Larson
Deep Dive | Oct 13, 2025

Artificial intelligence (AI) tools for radiology are commonly unmonitored once deployed. The lack of real-time case-by-case assessments of AI prediction confidence requires users to independently distinguish between trustworthy and unreliable AI predictions, which increases cognitive burden, reduces productivity, and potentially leads to misdiagnoses. To address these challenges, we introduce Ensembled Monitoring Model (EMM), a framework inspired by clinical consensus practices using multiple expert reviews. Designed specifically for black-box commercial AI products, EMM operates independently without requiring access to internal AI components or intermediate outputs, while still providing robust confidence measurements. Using intracranial hemorrhage detection as our test case on a large, diverse dataset of 2919 studies, we demonstrate that EMM can successfully categorize confidence in the AI-generated prediction, suggest appropriate actions, and help physicians recognize low confidence scenarios, ultimately reducing cognitive burden. Importantly, we provide key technical considerations and best practices for successfully translating EMM into clinical settings.

Healthcare
Regulation, Policy, Governance

2025-2026 Applications For Fellowships Are Open

The Institute aims to appoint and support promising researchers through its fellowship programs.

Learn more about Fellowships

Learn more about Grants

The 2025 AI Index Is Here

New in this year’s report are in-depth analyses of the evolving landscape of AI hardware, novel estimates of inference costs, and new analyses of AI publication and patenting trends. We also introduce fresh data on corporate adoption of responsible AI practices, along with expanded coverage of AI’s growing role in science and medicine.

Read the Report

Discover Student Affinity Groups

Latest News in Research

Get Involved

Become HAI Affiliated Faculty

Learn more about our Faculty Affiliate program. Stanford faculty are encouraged to participate.

Student Research Opportunities

View research opportunities across HAI's programs, centers, labs, and initiatives.


Guidelines for HAI sponsorship of your affinity group

Do you have ideas on advancing AI to improve the human condition? You’re invited to apply.
Announcement

Stanford HAI Selects 12 New Student Affinity Groups

Nov 20

This year, affinity group topics include accessibility for individuals with disabilities, artistic creation, education, healthcare, journalism, workforce productivity, and more. 

News

Building the Next Generation of AI Scholars

Beth Jensen
Education, Skills | Jul 12

A cross-disciplinary group of Stanford students explores fresh approaches to human-centered AI.

Active Grants

The Stanford Institute for Human-Centered AI strives to foster a culture of interdisciplinary AI research in which technological advancements are inextricably linked to research about their potential societal impacts.

Learn more about HAI Grants

Active

HAI and Wu Tsai Neuro Partnership Grant

Open. Applications due on January 16, 2026.

Stanford HAI and the Wu Tsai Neurosciences Institute jointly seek proposals that transform our understanding of the human brain using AI and advance the development of intelligent technology.

Active

Hoffman-Yee Research Grants

Open. Letters of Intent due on January 28, 2026.

The Hoffman-Yee Research Grants are designed to address significant scientific, technical, or societal challenges requiring an interdisciplinary team and a bold approach.

These grants are made possible by a gift from philanthropists Reid Hoffman and Michelle Yee.

News

Smart Enough to Do Math, Dumb Enough to Fail: The Hunt for a Better AI Test

Andrew Myers
Foundation Models | Generative AI | Privacy, Safety, Security | Feb 02

A Stanford HAI workshop brought together experts to develop new evaluation methods that assess AI's hidden capabilities, not just its test-taking performance.

AI Can’t Do Physics Well – And That’s a Roadblock to Autonomy
Andrew Myers
Jan 26
News

QuantiPhy is a new benchmark and training framework that evaluates whether AI can numerically reason about physical properties in video images. QuantiPhy reveals that today’s models struggle with basic estimates of size, speed, and distance but offers a way forward.

Stanford HAI and Swiss National AI Institute Form Alliance to Advance Open, Human-Centered AI
Jan 22
Announcement

Stanford, ETH Zurich, and EPFL will develop open-source foundation models that prioritize societal values over commercial interests, strengthening academia's role in shaping AI's future.

AI Reveals How Brain Activity Unfolds Over Time
Andrew Myers
Jan 21
News

Stanford researchers have developed a deep learning model that transforms overwhelming brain data into clear trajectories, opening new possibilities for understanding thought, emotion, and neurological disease.

Vital Set Of Policy Recommendations For Stridently Dealing With AI That Provides Mental Health Advice
Forbes
Dec 11
Media Mention

Forbes columnist Lance Eliot describes Stanford HAI's recent response to the FDA's request for comment (RFC), which focused on policy recommendations for mental health and AI.