How AI Can Augment Health Care: Trends To Watch

Date
April 01, 2021
Topics
Healthcare
Machine Learning
REUTERS/Alvin Baez

The HAI spring conference examines how technology can support home caregivers, enable more proactive care, earn medical providers’ trust, and assist in early childhood development.

Home-based health care became a hallmark of the pandemic during global lockdowns. But the need for it long predated the challenges COVID-19 posed.

In this context, “immersive” health care efforts — home-based and mobile health technologies designed to keep people healthy outside of the clinic — could be a boon to patients and their caregivers.

“Our vision of immersive care is about endowing human caregivers with ‘superpowers’ — through augmentation, not replacement, to upskill rather than deskill,” said Cornell associate dean and professor of computer science Deborah Estrin during HAI’s Spring Conference, “Intelligence Augmentation: Using AI To Solve Global Problems.”

As part of HAI’s spring conference, Estrin joined other health experts, as well as scholars in education and the arts, to explain AI’s ability to augment — not replace — critical human work. During the health panel, experts discussed AI advances in the home, proactive and preventive care, and faster diagnostics, particularly in early childhood development. (Watch the full conference here.)

In-Home Health Care

Home-based caregivers can “make or break health care outcomes,” Estrin said, and most put in long hours, often for low compensation, while dealing with wide-ranging health-related and other issues. Here, AI can deliver promising offerings.

For example, a caregiver assisting a stroke patient at home could remotely access a specialist clinician for guidance, aided by virtual-reality technologies such as detailed visual information overlaid on the patient’s body, while an AI-based agent learns alongside the caregiver so it can later provide automated assistance with diagnosis and treatment.

But that vision involves overcoming several hurdles, including helping AI agents learn from limited data and ensuring security and privacy for home-based patients.

Values must guide these immersive-care technologies, especially in a health care ecosystem where prevention is reduced to a “positive externality,” as Estrin said. “The technology must conform to the Hippocratic oath: ‘First do no harm,’ ” she said. “It’s about preserving or increasing autonomy for patients and respecting and supporting caregivers, while reducing costs.”

From Reactive to Proactive Care

AI can also move us from reactive to proactive health care, multiple speakers noted.

Microsoft Chief Scientific Officer Eric Horvitz’s team has created AI-based technologies in this space. One, the readmissions manager tool, predicts whether patients will return to hospitals within 30 days of release; it was designed to help physicians allocate special support during the original stay. To improve the movement from data to predictions to actions, Horvitz said, the team built a new system coupling machine learning (ML) with automated decision analysis: it weighs intervention cost against likelihood of success and creates visualizations that help physicians understand system outputs and gain insights.
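The decision-analytic pattern Horvitz described — weighing intervention cost against likelihood of success on top of a risk prediction — can be sketched in a few lines. The numbers, function names, and cost model below are illustrative assumptions, not the actual Microsoft system:

```python
# Sketch of coupling a readmission-risk prediction with decision
# analysis. All costs, probabilities, and names here are invented
# for illustration; they do not describe the real tool.

def expected_net_benefit(p_readmit, p_success, cost_intervention, cost_readmission):
    """Expected savings from intervening on one patient.

    Without intervention, expected cost is p_readmit * cost_readmission.
    An intervention always costs cost_intervention and averts the
    readmission with probability p_success.
    """
    cost_no_action = p_readmit * cost_readmission
    cost_action = cost_intervention + (1 - p_success) * p_readmit * cost_readmission
    return cost_no_action - cost_action

# Triage: recommend special support only when the expected net
# benefit of intervening is positive.
patients = [
    {"id": "A", "p_readmit": 0.40, "p_success": 0.50},  # high risk
    {"id": "B", "p_readmit": 0.05, "p_success": 0.50},  # low risk
]
for patient in patients:
    benefit = expected_net_benefit(
        patient["p_readmit"], patient["p_success"],
        cost_intervention=500, cost_readmission=10_000,
    )
    patient["intervene"] = benefit > 0
```

Under these made-up costs, only the high-risk patient clears the threshold, mirroring the tool’s goal of helping physicians direct special support to where it pays off.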

A separate project studied cognitive errors, which account for up to 400,000 deaths per year. The team analyzed 15 years of visits to a large hospital emergency department to understand patterns in “failure to rescue,” or when clinicians miss potentially dangerous diagnosable conditions, and trained an AI model that successfully predicts severe clinical events, improving outcomes.

“AI could provide a safety net for these mistakes,” Horvitz said.

The greatest opportunity in this space is in complementarity, or “weaving together human and machine intellect,” Horvitz said. The Camelyon Grand Challenge, for example, showed human experts had a 3.5 percent error rate in identifying metastatic breast cancer in lymph-node sections, but that was reduced to 0.5 percent when combined with AI. Similarly, a machine learning system trained to ask for human input yielded a 0 percent error rate in an imaged anatomical region highly prone to human error.
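One common way to realize this complementarity is selective deferral: the model acts on cases it is confident about and routes the rest to a human expert. The sketch below is a generic illustration of that pattern; the stub predictors and the threshold are assumptions, not the system Horvitz described:

```python
# Generic human-machine deferral pattern: act on confident model
# predictions, ask a human otherwise. The stubs and the threshold
# are illustrative assumptions.

def model_predict(case):
    """Stand-in for an ML classifier returning (label, confidence)."""
    return case["model_label"], case["model_confidence"]

def human_review(case):
    """Stand-in for routing a case to a human expert."""
    return case["human_label"]

def complementary_predict(case, confidence_threshold=0.9):
    """Use the model when it is confident; defer to a human otherwise."""
    label, confidence = model_predict(case)
    if confidence >= confidence_threshold:
        return label, "model"
    return human_review(case), "human"
```

In practice the threshold would be tuned so that deferred cases land where human experts are strongest — the regime in which the combined error rate can fall below that of either party alone, as in the Camelyon results above.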

Building Trust in the Machine

Suchi Saria, Johns Hopkins associate professor of computer science, addressed the challenges in incorporating AI in a physician’s daily work.

“Today’s care is reactive, overwhelms providers with technology, and yields lots of medical waste,” she said. Her work through her university and company (Bayesian Health) seeks largely to improve physician trust in new technologies.

For example, one study showed that clinicians were much less likely to trust diagnoses based on chest X-rays when they believed the outputs were AI-based — even though all diagnoses were actually provided by humans.

“The non-negotiable to tackle human bias against AI is high-quality machine learning,” Saria said. That requires high-quality inputs, targets, and learners, which are hard to come by in health care. For example, using higher-quality disease information (such as severity, rather than broad billing codes, a common proxy today) improves machine-learning predictions significantly.

Still, simply giving clinicians models or data won’t necessarily drive adoption and behavior change. Saria’s work shows that activating buy-in requires usable insights and systems with friendlier interfaces, ultimately reducing the medical community’s anti-AI bias.

AI and Healthy Child Development

The session’s final speaker, Dennis Wall, Stanford associate professor of pediatrics, focused on AI’s role in addressing multiple aspects of child development globally.

He pointed to the example of autism, which affects 1 in 40 children today: “There’s a disconnect between people and services. The health care ecosystem logjams as 800,000 kids are pushed through the U.S. system, waiting two-and-a-half years to receive diagnoses in some cases.”

Wall’s team employs AI-based technology to help clinicians diagnose autism based on children’s vocalizations. The team used machine learning to analyze home-video data, harnessing a small set of features to predict clinical outcomes; this empowers even non-experts to detect autism features in videos with high accuracy, reducing time to diagnosis. They’re implementing this tool in Bangladesh, the Philippines, and elsewhere.
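The core idea of predicting a clinical outcome from a small feature set can be sketched as a simple logistic score over a handful of behaviors rated from video. The features, weights, and threshold below are invented purely for illustration and do not reflect Wall’s actual model:

```python
import math

# Hypothetical behavioral features rated from home video on a 0-1
# scale. The feature names, weights, and bias are invented for
# illustration only; they are not the real model's parameters.
FEATURE_WEIGHTS = {
    "eye_contact": -1.2,            # more eye contact lowers the score
    "responds_to_name": -0.8,
    "repetitive_motion": 1.5,
    "atypical_vocalization": 1.1,
}
BIAS = -0.2

def risk_score(features):
    """Logistic score over a small set of rated features."""
    z = BIAS + sum(FEATURE_WEIGHTS[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-z))

def flag_for_clinical_review(features, threshold=0.5):
    """Non-expert raters score the features; high scores go to a clinician."""
    return risk_score(features) >= threshold
```

The appeal of such a small model is speed and accessibility: a short rating form a non-expert can complete replaces part of a lengthy specialist evaluation, with the flagged cases still routed to clinicians for diagnosis.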

These technologies also apply to therapy. For example, Wall’s team has built prototypes of augmented-reality glasses that help children understand and interpret the social world; the devices detect others’ emotions via facial recognition and deliver dynamic, real-time feedback to children. Cognoa has licensed some of the technology, shown to be effective in pilot trials, for commercialization.

Similarly, Wall’s team developed Guess What?, a game that helps autistic children act out emotions displayed on a phone that adults hold up to their own foreheads, as in the popular Heads Up game. This helps children gain social skills — eye contact, emotion recognition — while generating useful training data.

“We’re excited to work on other pediatric health care solutions going forward,” Wall said.

Want to learn more about how AI can augment work? Read about our conference sessions on art and education, or watch the session videos here.

 

Contributor(s)
Sachin Waikar