Equitable Implementation of a Precision Digital Health Program for Glucose Management in Individuals with Newly Diagnosed Type 1 Diabetes

Date
July 30, 2024
Topics
Healthcare
Sciences (Social, Health, Biological, Physical)
Abstract

Few young people with type 1 diabetes (T1D) meet glucose targets. Continuous glucose monitoring improves glycemia, but access is not equitable. We prospectively assessed the impact of a systematic and equitable digital health, team-based care program implementing tighter glucose targets (HbA1c < 7%), early technology use (continuous glucose monitoring starting < 1 month after diagnosis), and remote patient monitoring on glycemia in young people with newly diagnosed T1D enrolled in the Teamwork, Targets, Technology, and Tight Control study (4T Study 1). The primary outcome was the change in HbA1c from 4 to 12 months after diagnosis; the secondary outcome was achievement of the HbA1c targets. The 4T Study 1 cohort (36.8% Hispanic and 35.3% publicly insured) had a mean HbA1c of 6.58%, with 64% achieving HbA1c < 7% and a mean time in range (70-180 mg/dl) of 68% at 1 year after diagnosis. Clinical implementation of the 4T Study 1 met the prespecified primary outcome and improved glycemia without unexpected serious adverse events. The strategies in the 4T Study 1 can be used to implement systematic and equitable care for individuals with T1D and could translate to care for other chronic diseases.
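The two headline metrics in the abstract, time in range and HbA1c target achievement, are simple to compute from raw data. As a minimal illustrative sketch (not code from the 4T Study; the function names and thresholds below are only what the abstract states), assuming CGM readings arrive as glucose values in mg/dl:

```python
# Illustrative sketch of the glycemic metrics named in the abstract.
# Function and variable names are hypothetical, not from the 4T Study codebase.

def time_in_range(glucose_mg_dl, low=70, high=180):
    """Fraction of CGM readings inside the 70-180 mg/dl target range."""
    if not glucose_mg_dl:
        raise ValueError("no CGM readings supplied")
    in_range = sum(low <= g <= high for g in glucose_mg_dl)
    return in_range / len(glucose_mg_dl)

def meets_hba1c_target(hba1c_percent, target=7.0):
    """True if an HbA1c value meets the study's tighter < 7% target."""
    return hba1c_percent < target

readings = [95, 140, 210, 65, 150, 175, 120]  # example CGM values, mg/dl
print(f"time in range: {time_in_range(readings):.0%}")
print(meets_hba1c_target(6.58))  # cohort mean at 1 year -> True
```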

Authors
  • Priya Prahalad
  • David Scheinker
  • Manisha Desai
  • Victoria Y Ding
  • Franziska K Bishop
  • Ming Yeh Lee
  • Johannes Ferstad
  • Dessi P Zaharieva
  • Ananta Addala
  • Ramesh Johari
  • Korey Hood
  • David Maahs
Related Publications

The AI Arms Race In Health Insurance Utilization Review: Promises Of Efficiency And Risks Of Supercharged Flaws
Michelle Mello, Artem Trotsyuk, Abdoul Jalil Djiberou Mahamadou, Danton Char
Healthcare · Regulation, Policy, Governance · Quick Read · Jan 06, 2026

Health insurers and health care provider organizations are increasingly using artificial intelligence (AI) tools in prior authorization and claims processes. AI offers many potential benefits, but its adoption has raised concerns about the role of the “humans in the loop,” users’ understanding of AI, opacity of algorithmic determinations, underperformance in certain tasks, automation bias, and unintended social consequences. To date, institutional governance by insurers and providers has not fully met the challenge of ensuring responsible use. However, several steps could be taken to help realize the benefits of AI use while minimizing risks. Drawing on empirical work on AI use and our own ethical assessments of provider-facing tools as part of the AI governance process at Stanford Health Care, we examine why utilization review has attracted so much AI innovation and why it is challenging to ensure responsible use of AI. We conclude with several steps that could be taken to help realize the benefits of AI use while minimizing risks.


AI, Health, and Health Care Today and Tomorrow: The JAMA Summit Report on Artificial Intelligence
Tina Hernandez-Boussard, Michelle Mello, Nigam Shah, Co-authored by 50+ experts
Healthcare · Regulation, Policy, Governance · Deep Dive · Oct 13, 2025
Automated real-time assessment of intracranial hemorrhage detection AI using an ensembled monitoring model (EMM)
Zhongnan Fang, Andrew Johnston, Lina Cheuy, Hye Sun Na, Magdalini Paschali, Camila Gonzalez, Bonnie Armstrong, Arogya Koirala, Derrick Laurel, Andrew Walker Campion, Michael Iv, Akshay Chaudhari, David B. Larson
Healthcare · Regulation, Policy, Governance · Deep Dive · Oct 13, 2025

Artificial intelligence (AI) tools for radiology are commonly unmonitored once deployed. The lack of real-time case-by-case assessments of AI prediction confidence requires users to independently distinguish between trustworthy and unreliable AI predictions, which increases cognitive burden, reduces productivity, and potentially leads to misdiagnoses. To address these challenges, we introduce Ensembled Monitoring Model (EMM), a framework inspired by clinical consensus practices using multiple expert reviews. Designed specifically for black-box commercial AI products, EMM operates independently without requiring access to internal AI components or intermediate outputs, while still providing robust confidence measurements. Using intracranial hemorrhage detection as our test case on a large, diverse dataset of 2919 studies, we demonstrate that EMM can successfully categorize confidence in the AI-generated prediction, suggest appropriate actions, and help physicians recognize low confidence scenarios, ultimately reducing cognitive burden. Importantly, we provide key technical considerations and best practices for successfully translating EMM into clinical settings.
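The paper's implementation details are not given in this abstract; as a rough illustration of the ensemble idea it describes, the Python sketch below (all names, thresholds, and the action mapping are hypothetical assumptions, not the paper's actual EMM) aggregates the agreement of several independent monitors into a confidence category and a suggested action:

```python
# Hypothetical sketch of the ensemble-monitoring idea from the abstract:
# several independent monitors each score a black-box AI prediction, and
# their agreement is mapped to a confidence category plus a suggested action.
# Thresholds and names are illustrative, not the published EMM implementation.

from statistics import mean

def emm_confidence(monitor_scores, high=0.8, low=0.5):
    """Map ensemble agreement (scores in [0, 1]) to a confidence category."""
    agreement = mean(monitor_scores)
    if agreement >= high:
        return "high", "accept AI result"
    if agreement >= low:
        return "intermediate", "flag for routine review"
    return "low", "prioritize radiologist review"

# Example: three monitors scoring one intracranial-hemorrhage prediction.
category, action = emm_confidence([0.30, 0.25, 0.40])
print(category, "->", action)  # low -> prioritize radiologist review
```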


Developing mental health AI tools that improve care across different groups and contexts
Nicole Martinez-Martin
Healthcare · Regulation, Policy, Governance · Deep Dive · Oct 10, 2025

To realize the potential of mental health AI applications to deliver improved care, a multipronged approach is needed, including representative AI datasets, research practices that reflect and anticipate potential sources of bias, stakeholder engagement, and equitable design practices.
