In the mid-2010s when the quantified-self movement was gaining momentum, Shriti Raj, now a HAI faculty fellow and an assistant professor (research) in Stanford’s Center for Biomedical Informatics Research, became a self-tracking enthusiast. She was drawn to the idea that by gathering data from wearable devices, she could find opportunities to improve her body and her mind.
This interest also merged with her desire to make data more useful in healthcare. She saw an opportunity to improve care for people with chronic conditions using data not only from smartwatches and exercise trackers, but also from medical wearables worn by people with diabetes or heart disease.
As she immersed herself in the field, she learned that the vast quantity of data collected by wearables is underutilized in healthcare: clinicians see their patients only periodically, and patients themselves can be overwhelmed by the data or unsure how to interpret and act on it.
That’s a problem Raj wants to solve. Because data is so telling about whether people with chronic diseases are achieving their goals, she’s doing what she can to make that data more useful by designing better tools. The goal: to nudge people toward healthier lives.
Here, Raj speaks about the roots of her interest in data-driven healthcare, the opportunities and challenges of the field, and her hopes for her HAI faculty fellowship.
How did you develop an interest in data-driven healthcare?
My interest began while I was working at Goldman Sachs as a software developer shortly after completing my undergraduate degree. We were developing applications that enabled portfolio managers, traders, or investment strategists to review U.S. market data, which would then inform their trade decisions. There were times when the collaboration between users and developers would not pan out as you’d hope. And I thought: There must be a better way for people to understand each other.
It was that missing link that led me to pursue a degree in informatics, where I learned not just more about computer science as a field, but also about the user side, including human-computer interaction (HCI), which is now one of my specialties. I see myself as a human-centered computing researcher, which means I understand not only how to compute something, but also how to put the computation in a specific context and make it work for users.
I also come from a family of clinicians. Growing up, everyone around me was a doctor. So, I chose to work with health data because helping people work toward improved health has always been a cause close to my heart.
How did you land here at HAI as a faculty fellow, and what do you hope to accomplish in that role?
The HAI faculty fellow position, with a joint appointment as an assistant professor of medicine in Stanford’s Center for Biomedical Informatics Research, offers me a place where I can work closely with experts not only in computer science and AI, but also in the medical field. It also offers access to patients and providers who can test the data and computing tools I’m working on, which will give me deeper insight into the problems they face that form the basis of my research.
The AI movement is going to touch everyone’s lives sooner or later, and I see a huge potential for AI and machine learning to help people who are living with chronic health conditions. These are people whose lives are already very difficult, so they need tools that can sift through their health data to gather intelligence and then deliver it to them in a way that allows them to act on it or better work with their clinicians. In doing that, we need to go from providing data and a bit of insight that the computer dumped out to helping people understand why that insight is important, what led to it, and how they can apply it in their lives. We also need to find ways in which data-based insights can help trigger patient engagement with other useful resources that can drive better health outcomes.
So, at HAI I am going to work on identifying principles and methods for integrating computing into people’s lives in a way that helps them act, rather than leaving them more and more aware of problematic health data and burdened by that awareness.
Can you point to examples of how you might use AI to make health data more useful to patients and clinicians?
A recent project I worked on helped patients with Type 1 Diabetes (T1D) better understand their high blood sugar events by presenting them in a narrative way – for example, by framing an event as having occurred after the person ate too many meals in quick succession, or after their carbohydrate intake jumped. That approach gave users helpful information they could use in the future.
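To make the idea concrete, here is a minimal sketch of what narrative framing of a hyperglycemia event might look like in code. This is purely illustrative and not from Raj’s actual system: the glucose threshold, the two-hour meal window, the carbohydrate cutoff, and the function name are all assumptions chosen for the example.

```python
from datetime import datetime, timedelta

# Illustrative thresholds -- not from the study described in the article.
HIGH_GLUCOSE_MG_DL = 180              # a commonly cited hyperglycemia cutoff
MEAL_CLUSTER_WINDOW = timedelta(hours=2)

def narrate_high_event(event_time, meals):
    """Attach a plain-language explanation to a high blood sugar event.

    meals: list of (timestamp, carbs_in_grams) tuples from the patient's log.
    """
    # Meals logged within the window *before* the event.
    recent = [m for m in meals
              if timedelta(0) <= event_time - m[0] <= MEAL_CLUSTER_WINDOW]
    if len(recent) >= 2:
        return ("High glucose after eating {} meals in quick "
                "succession.".format(len(recent)))
    if recent and recent[0][1] >= 60:  # large carb intake, illustrative cutoff
        return ("High glucose after a high-carbohydrate "
                "meal ({} g).".format(recent[0][1]))
    return "High glucose with no recent logged meal."

meals = [(datetime(2024, 5, 1, 12, 0), 45),
         (datetime(2024, 5, 1, 13, 15), 30)]
print(narrate_high_event(datetime(2024, 5, 1, 13, 45), meals))
# -> High glucose after eating 2 meals in quick succession.
```

A real system would learn such associations from continuous glucose monitor and meal-log data rather than hard-coding rules; the point of the sketch is only the output style, which frames the event as a story the patient can act on.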
A second project I worked on helped clinicians make sense of T1D patients’ continuous glucose monitor and insulin pump data.
In both projects, we simulated the outputs that an intelligent system could generate and then designed interfaces where patients or clinicians could interact with those outputs to render them applicable for their needs. We then evaluated whether the interface improved patients’ or clinicians’ understanding of the data, and whether the system helped them make higher quality decisions.
This work is not about automating healthcare decisions. It leans instead toward using AI to generate intelligence that is otherwise hard for people to produce on their own, while almost always leaving people in charge of the decisions that have to be made. It’s about reducing the cognitive load of engaging with data and the cognitive load of making decisions, especially when you’re making healthcare decisions many times a day just to remain alive, as people with Type 1 Diabetes do.
What are the opportunities for AI in healthcare that most excite you?
On the clinical side, I’m excited by opportunities to design decision support tools that live up to their promise. AI might seem to offer helpful information, but too often, when these systems are deployed, they deliver it in an annoying, irrelevant, or yawn-inducing way. We need to understand how to perform rich evaluations of clinical decision support tools in the lab so that we know how they might fail even before they are deployed in the real world.
That’s one of the mysteries I’m trying to work toward solving.
On the patients’ side, I am keen to explore how people’s health data can be used to create and deliver personalized educational messages.
What are the challenges for making data-driven healthcare a reality?
The first challenge is that the people who are building AI models for healthcare are not necessarily clinicians or healthcare experts. This leads to a feasibility versus usefulness gap. Just because something is feasible using AI – just because you can mine the data and get something out of it – does not mean that it’s useful or beneficial to patients or providers. And just because something is useful does not mean that it’s feasible to do computationally. This is a big gap, but it can be at least partly bridged by evaluating AI models on domain-specific tasks, and by giving clinicians a better way to inject their expertise into the models, and the systems around those models, to make them fit their needs.
The second challenge is the slow-moving nature of large healthcare systems. The concept of rapid iteration, where you’re able to push things out into production in very short cycles, is not possible with healthcare systems. You can’t test things and “fail early” when you’re dealing with patient health. Instead, you have to find failures early in the lab, which is hard, because everything can look golden in the lab until you test it in the wild.
The third challenge is around the attitudes of human stakeholders. People feel genuinely threatened by AI because of the whole debate about whether AI will deskill people or will replace them in the workforce. This is true even for clinicians. Good computer vision systems that can diagnose breast cancer based on a scan might make pathologists worry about their job security.
And a related challenge is AI literacy. We need people to understand that AI, at least for the foreseeable future, will work very well in an augmentative role rather than by replacing skilled workers. I think we need to be having more conversations around that with people whose work could be affected by AI, so they think about how to leverage AI without compromising their own professional value.
Stanford HAI’s mission is to advance AI research, education, policy and practice to improve the human condition.