Home-based health care became a hallmark of the pandemic during global lockdowns. But the need for it long predated the challenges COVID-19 posed.
In this context, “immersive” health care efforts — home-based and mobile health technologies designed to keep people healthy outside of the clinic — could be a boon to patients and their caregivers.
“Our vision of immersive care is about endowing human caregivers with ‘superpowers’ — through augmentation, not replacement, to upskill rather than deskill,” said Cornell associate dean and professor of computer science Deborah Estrin during HAI’s Spring Conference, “Intelligence Augmentation: Using AI To Solve Global Problems.”
As part of HAI’s spring conference, Estrin joined other health experts and scholars in education and the arts to explain AI’s ability to augment — not replace — critical human work. During the health panel, experts discussed AI advances in the home, proactive and preventive care, and faster diagnostics, particularly in early childhood development. (Watch the full conference here.)
In-Home Health Care
Home-based caregivers can “make or break health care outcomes,” Estrin said, and most put in long hours, often for low compensation, while dealing with wide-ranging health-related and other issues. Here, AI can deliver promising offerings.
For example, a caregiver assisting a stroke patient at home could consult a specialist clinician remotely, guided by virtual-reality technologies such as detailed visual information overlaid on the patient’s body. Meanwhile, an AI-based agent learns alongside the caregiver so it can later provide automated assistance with diagnosis and treatment.
But that vision involves overcoming several hurdles, including helping AI agents learn from limited data and ensuring security and privacy for home-based patients.
Values must guide these immersive-care technologies, especially in a health care ecosystem where prevention is reduced to a “positive externality,” as Estrin said. “The technology must conform to the Hippocratic oath: ‘First do no harm,’ ” she said. “It’s about preserving or increasing autonomy for patients and respecting and supporting caregivers, while reducing costs.”
From Reactive to Proactive Care
AI can also move us from reactive to proactive health care, multiple speakers noted.
Microsoft Chief Scientific Officer Eric Horvitz’s team has created AI-based technologies in this space. One, the readmissions manager tool, predicts whether patients will return to hospitals within 30 days of release. The tool was designed to help physicians allocate special support during the original stay. To improve movement from data to predictions to actions, Horvitz said, they built a new system that coupled machine learning (ML) with automated decision analysis; it considers intervention cost and likelihood of success and creates visualizations to help physicians understand system outputs and gain insights.
A separate project studied cognitive errors, which account for up to 400,000 deaths per year. The team studied 15 years of visits to a large hospital emergency department to understand patterns in “failure to rescue,” or when clinicians miss potentially dangerous diagnosable conditions. They then trained an AI model that successfully predicted severe clinical events, improving outcomes.
“AI could provide a safety net for these mistakes,” Horvitz said.
The greatest opportunity in this space is in complementarity, or “weaving together human and machine intellect,” Horvitz said. The Camelyon Grand Challenge, for example, showed human experts had a 3.5 percent error rate in identifying metastatic breast cancer in lymph-node sections, but the error rate dropped to 0.5 percent when human and AI assessments were combined. Similarly, a machine learning system trained to ask for human input yielded a 0 percent error rate in an imaged anatomical region highly prone to human error.
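A common way to implement this kind of complementarity is a deferral policy: the model acts only on confident predictions and routes uncertain cases to a human expert. The thresholds and labels below are illustrative assumptions, not the system Horvitz described:

```python
# Illustrative sketch of human-AI complementarity via deferral:
# the model decides confident cases and refers uncertain ones to a human.
def triage(prob_positive, low=0.1, high=0.9):
    """Return the model's call, or defer when confidence is in the gray zone."""
    if prob_positive >= high:
        return "positive"
    if prob_positive <= low:
        return "negative"
    return "refer to human"

for p in (0.97, 0.03, 0.55):
    print(p, "->", triage(p))
# 0.97 -> positive, 0.03 -> negative, 0.55 -> refer to human
```

The design choice is where to set the thresholds: a wider gray zone sends more cases to humans, trading expert time for fewer machine errors.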
Building Trust in the Machine
Suchi Saria, Johns Hopkins associate professor of computer science, addressed the challenges in incorporating AI in a physician’s daily work.
“Today’s care is reactive, overwhelms providers with technology, and yields lots of medical waste,” she said. Her work through her university and company (Bayesian Health) seeks largely to improve physician trust in new technologies.
For example, one study showed that clinicians were much less likely to trust diagnoses based on chest X-rays when they believed the outputs were AI-based — even though all diagnoses were actually provided by humans.
“The non-negotiable to tackle human bias against AI is high-quality machine learning,” Saria said. That requires high-quality inputs, targets, and learners, which are hard to come by in health care. For example, using higher-quality disease information (such as severity, rather than broad billing codes, a common proxy today) improves machine-learning predictions significantly.
Still, simply giving clinicians models or data won’t necessarily drive adoption and behavior change. Saria’s work shows that activating buy-in requires usable insights and systems with friendlier interfaces, ultimately reducing the medical community’s anti-AI bias.
AI and Healthy Child Development
The session’s final speaker, Dennis Wall, Stanford associate professor of pediatrics, focused on AI’s role in addressing multiple aspects of child development globally.
He pointed to the example of autism, which affects 1 in 40 children today: “There’s a disconnect between people and services. The health care ecosystem logjams as 800,000 kids are pushed through the U.S. system, waiting two-and-a-half years to receive diagnoses in some cases.”
Wall’s team employs AI-based technology to help clinicians diagnose autism based on children’s vocalizations. They used machine learning to analyze home-video data and harness a small set of features to predict clinical outcomes, empowering even non-experts to detect autism features from videos with high accuracy, reducing time to diagnosis. They’re implementing this tool in Bangladesh, the Philippines, and elsewhere.
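The idea of predicting a clinical outcome from a small set of coded behavioral features can be sketched with a tiny logistic model. Everything here — the feature names, weights, and bias — is invented for illustration and is not Wall's actual model or feature set:

```python
import math

# Illustrative sketch (NOT Wall's model): score a handful of behavioral
# features that a non-expert rater codes from home video, each on a 0-1 scale.
FEATURES = ["eye_contact", "responds_to_name", "social_smiling"]
WEIGHTS = {"eye_contact": -1.5, "responds_to_name": -1.2, "social_smiling": -0.8}
BIAS = 2.0  # invented parameters for illustration only

def risk_score(ratings):
    """Map 0-1 feature ratings to a probability via a logistic function."""
    z = BIAS + sum(WEIGHTS[f] * ratings[f] for f in FEATURES)
    return 1 / (1 + math.exp(-z))

# Lower ratings on these (hypothetical) features push the score higher.
print(risk_score({"eye_contact": 0.2, "responds_to_name": 0.3, "social_smiling": 0.4}))
```

The appeal of such a small model is exactly what the article describes: a handful of interpretable features that non-experts can rate quickly, rather than a lengthy specialist assessment.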
These technologies also apply to therapy. For example, Wall’s team has built prototypes of augmented-reality glasses that help children understand and interpret the social world; the devices detect others’ emotions via facial recognition and deliver dynamic, real-time feedback to children. Cognoa has licensed some of the technology — shown to be effective in pilot trials — for commercialization.
Similarly, Wall’s team developed Guess What?, a game that helps autistic children act out emotions displayed on a phone that adults hold up to their own foreheads, as in the popular Heads Up game. This helps children gain social skills — eye contact, emotion recognition — while generating useful training data.
“We’re excited to work on other pediatric health care solutions going forward,” Wall said.