Artificial intelligence’s remarkable ability to ingest huge amounts of data, make sense of images, and spot patterns that escape even the most skilled human eye has inspired hope that the technology will transform medicine. Realizing the full potential of this opportunity will require the combined efforts of experts in computer science, medicine, policy, mathematics, ethics, and more.
This interdisciplinary approach was the focus of conversation during the Center for Artificial Intelligence in Medicine & Imaging (AIMI) virtual conference on Aug. 5. The event, featuring world experts in computer science, medicine, industry, and government, looked at emerging clinical machine learning technologies with focused discussion around technical innovations, data ethics, policy, and regulation. The symposium was led by Matthew Lungren, Stanford associate professor of radiology and AIMI co-director, and Serena Yeung, assistant professor of biomedical data science and computer science and associate director of AIMI. The day’s discussions were organized around the theme of “consilience” — a term describing the new academic discipline that emerges when experts from different fields interact in open discussion.
Here are some of the takeaways from the daylong conference; the conference videos are also available.
AI’s Greatest Potential
Noted cardiologist, researcher, and author Eric Topol of Scripps Research sees three areas where AI has the greatest potential to transform medicine. The first is in reducing medical errors that lead to misdiagnosis, he said, noting a recent NYU study showing that human readers combined with AI were better at finding breast cancers than either was alone, particularly in reducing false-negative mammograms that delay care for many women. “This is really important stuff,” Topol said. “AI can see things the human eye can’t.”
Second, there will be a profusion of new applications that help patients self-manage their health throughout life, like smartphone apps that diagnose skin cancers. This will be the age of the “medical selfie,” Topol predicted — take a picture, get a diagnosis.
Finally, AI holds the promise of “making health care human again” by bringing the physician closer to the patient. We can lessen or eliminate the drudgery of data entry that leads to doctor fatigue and steals precious time with the patient, he said. Putting humanity back into the profession will be AI’s most important contribution to medicine, he added.
Democratizing the Data
It’s not enough to simply build medical AI products. What matters is getting those products out to people. “The best product in the world doesn’t do any good if people can’t access it,” said Lily Peng, product manager at Google Brain AI Research Group. As an example, she cited a recent model her team built to diagnose diabetic eye disease, which might have achieved similar results with a smaller, higher-quality dataset, potentially speeding a valuable product to market. “We need to bring these products closer to people.”
Stanford researcher Pranav Rajpurkar looked at the tendency for algorithms trained on proprietary or incomplete datasets to fail outside those friendly confines — that is, they don’t generalize. As one example he pointed to American-trained AI models for lung diseases that don’t include tuberculosis in their labeling. TB is a noted problem for the developing world, but less so in America, so scans of tuberculosis are not found in the training dataset. True democratization requires AI to work everywhere for everyone, he said. Simply adding images of tuberculosis to American training datasets would help generalize — and therefore democratize — valuable AI to other parts of the world.
Thoracic radiologist Gilberto Szarf discussed how democratization in his home country of Brazil means using AI to provide or speed care where specialists and resources are in short supply to treat melanoma, tuberculosis, Zika, and even COVID-19. He noted an AI model to diagnose Zika from medical images extends a valuable tool to regions of Brazil — a country almost the size of the U.S. — where quality medical care is a challenge.
AI developers must not underestimate the role of the regulator, said Khair El Zarrad, director of medical policy at the U.S. Food and Drug Administration (FDA). “Regulation is a collaboration,” he said. “Regulators are concerned about safety, but we also want to see good technologies get to the clinics.”
He was joined in discussion by Russell Stewart, a vice president at tele-radiology and AI startup Nines, and Hugh Harvey, a managing director at clinical digital consultancy Hardian Health. Stewart and Harvey noted the distinct regulatory differences separating algorithms from drugs and medical devices: While drugs and devices take years to develop and are not easily altered once approved, algorithms can be changed with a few keystrokes, making lasting regulation a challenge. Both panelists encouraged exacting language in describing the intended use of a product under regulatory scrutiny. Keep the scope narrowly focused and don’t overreach, cautioned Harvey. Stewart meanwhile counseled working collaboratively with regulators through the development process to help get technologies to market.
“We want to see new tools,” the FDA’s El Zarrad noted, “but we need to ensure AI enters the marketplace in a way we are all comfortable with.”
Emerging Technologies 2020
Many attendees were keen to hear what’s in the immediate offing for AI in medicine. Panelist Jeremy Howard, founding researcher at fast.ai, is focused on using AI to achieve “super-resolution.” Noting the dilemma that one can have high-resolution or fast scans, but not both, he and his group have set out to solve the problem. His process includes taking good images and making them worse — in order to train machines to make bad images good again. The result, he hopes, will be an algorithm that can turn lower-resolution, quickly recorded originals into high-quality scans.
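The degrade-then-restore idea Howard described can be illustrated with a toy sketch (this is illustrative code, not fast.ai’s actual implementation): deliberately downsample a clean image to create paired training examples, so a model can learn to map the degraded version back to the original.

```python
import numpy as np

def degrade(image: np.ndarray, factor: int = 2) -> np.ndarray:
    """Simulate a fast, low-resolution scan by average-pooling pixel blocks."""
    h, w = image.shape
    h2, w2 = h // factor, w // factor
    cropped = image[:h2 * factor, :w2 * factor]
    return cropped.reshape(h2, factor, w2, factor).mean(axis=(1, 3))

def make_training_pairs(images):
    """Pair each degraded (model input) image with its clean (target) original."""
    return [(degrade(img), img) for img in images]

# Toy "scan": an 8x8 gradient image standing in for a real medical image
scan = np.arange(64, dtype=float).reshape(8, 8)
pairs = make_training_pairs([scan])
low, high = pairs[0]
print(low.shape, high.shape)  # (4, 4) (8, 8)
```

A super-resolution model trained on such pairs would then be applied to genuinely fast, low-resolution acquisitions to estimate the high-quality scan.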
Meanwhile, Shreyas Vasanawala, a radiologist at Stanford, is exploring “upstream AI,” or using AI to determine decisions like which tests to perform, in what order tests should proceed, who should do them, or even how the scanning device is made. All this happens “upstream,” long before an image is made. Every decision point is a chance to improve, he says.
Nafissa Yakubova of Facebook AI previewed fastMRI, which seeks to speed image acquisition and diagnostic output. A single scan can take 20 to 60 minutes and doesn’t work well on moving organs, like hearts. Faster MRIs could transform an already transformational imaging technique. She reported impressive results: scans made four times faster than today’s MRI. Yakubova and team are partnering with NYU medical school and making public their dataset of hard-to-find raw MRIs.
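The standard way to accelerate MRI acquisition is to sample only a fraction of k-space (the raw frequency data) and reconstruct the image from the rest. A minimal sketch of that idea, using synthetic data rather than the fastMRI codebase, looks like this:

```python
import numpy as np

# Toy "fully sampled" image and its k-space (frequency-domain) representation
rng = np.random.default_rng(0)
image = rng.random((32, 32))
kspace = np.fft.fft2(image)

# Keep only every 4th row of k-space: a 4x acceleration mask
mask = np.zeros((32, 32))
mask[::4, :] = 1.0
undersampled = kspace * mask

# Zero-filled reconstruction: the aliased baseline a learned model would improve on
recon = np.abs(np.fft.ifft2(undersampled))

print(f"sampled fraction: {mask.mean():.2f}")  # 0.25, i.e. 4x faster acquisition
```

A learned reconstruction model replaces the naive zero-filled inverse transform, removing the aliasing artifacts that undersampling introduces.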
Fairness for All
The concept of equity was front of mind for many speakers. Fast.ai co-founder Rachel Thomas pointed out that AI created for good can do plenty of harm when it doesn’t universally help patients. One example: wrist-worn devices that monitor heart activity using a green-light technology that doesn’t work as well for people of color. Millions wear these devices and for far too many, the data is unreliable or even misleading.
Google technical program manager Donald Martin noted that bias can enter medical AI in two ways: many datasets are drawn from largely white, Northern European populations, and flawed assumptions can be written into the algorithms’ decision-making processes. Martin has developed a process he calls “Community Based System Dynamics.” CBSD identifies and removes bias in data gathering and analysis by requiring explicit statements about the causal assumptions an algorithm uses to make decisions and by encouraging greater diversity on development teams. Martin highlighted a case of racial bias in an AI-powered algorithm designed to single out patients with complex health care needs for additional care. The AI team’s causal theory was that patient spending on health care is a predictor of complexity of need. Unfortunately, African Americans, on average, spend less on health care and were, therefore, not selected for needed care.
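The causal flaw Martin described can be made concrete with a deterministic toy example (the patients and numbers below are hypothetical, chosen only to illustrate the mechanism): when one group spends less at the same level of need, ranking patients by spending systematically under-selects that group.

```python
# Hypothetical patients: (group, true_need, annual_spending).
# Group B spends less than group A at every level of need,
# mirroring the spending-as-proxy flaw described above.
patients = [
    ("A", 9, 9000), ("A", 7, 7000), ("A", 5, 5000), ("A", 3, 3000),
    ("B", 9, 4000), ("B", 7, 2800), ("B", 5, 2000), ("B", 3, 1200),
]

def select_top(patients, k, key_index):
    """Flag the k patients ranked highest by the chosen column (the proxy)."""
    return sorted(patients, key=lambda p: p[key_index], reverse=True)[:k]

by_spending = select_top(patients, 4, key_index=2)  # proxy: spending
by_need = select_top(patients, 4, key_index=1)      # ground truth: need

print([p[0] for p in by_spending])  # ['A', 'A', 'A', 'B'] -- under-selects group B
print([p[0] for p in by_need])      # ['A', 'B', 'A', 'B'] -- selects both equally
```

Requiring the team to state the causal assumption out loud (“spending predicts need”) is exactly what exposes the gap between the proxy ranking and the true-need ranking.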
Stanford bioinformatics expert Jonathan Chen closed with an upbeat look at “the wisdom of the crowd.” Chen has developed an AI-powered tool that recommends medical care based on how other doctors have treated patients with similar symptoms — like an ecommerce recommender engine that says, “You might like this book,” based on what others have read. AI holds the promise, Chen says, of being able to amplify our knowledge and capabilities while leading to better, more equitable decisions.
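The “wisdom of the crowd” idea behind such a recommender can be sketched as a nearest-neighbor vote (a toy illustration with hypothetical data, not Chen’s actual tool): find the past patients whose symptom profiles are most similar, then suggest the treatment their doctors chose most often.

```python
from collections import Counter

# Hypothetical records: (binary symptom vector, treatment the doctor chose)
records = [
    ([1, 1, 0, 0], "drug_x"),
    ([1, 1, 1, 0], "drug_x"),
    ([0, 0, 1, 1], "drug_y"),
    ([0, 1, 1, 1], "drug_y"),
    ([1, 0, 0, 0], "drug_x"),
]

def recommend(symptoms, records, k=3):
    """Suggest the most common treatment among the k most-similar past patients."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(records, key=lambda r: distance(symptoms, r[0]))[:k]
    return Counter(treatment for _, treatment in nearest).most_common(1)[0][0]

print(recommend([1, 1, 0, 1], records))  # drug_x
```

As with an ecommerce recommender, the quality of the suggestion depends entirely on how representative the pool of past patients is — which ties back to the fairness concerns raised earlier in the session.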
Stanford HAI's mission is to advance AI research, education, policy and practice to improve the human condition.