AI Improves Alzheimer’s Imaging

January 10, 2020
HAI seed grant helps make Alzheimer’s disease imaging safer and more affordable

Confirming a diagnosis of Alzheimer’s disease requires an expensive PET scan that exposes the patient to a high dose of full-body radiation. With seed grant support from the Stanford Institute for Human-Centered Artificial Intelligence (HAI), a group of Stanford researchers can now diagnose Alzheimer’s disease just as successfully by applying artificial intelligence (AI) to low-dose PET scans and simultaneously acquired MRI images. “This work has the advantage for the patients of being safer, lower dose, faster, cheaper, all the things you’d want as a patient,” said Greg Zaharchuk, professor of radiology at Stanford University and 2018 HAI seed grantee.

Using artificial intelligence, Zaharchuk’s team has become adept at what’s called image transformation: they can take one image or set of images and use a type of AI called a convolutional neural network (CNN) to produce a new set of images as the output. “If the information you want exists in the images you have acquired, then you can train a classifier using a CNN,” Zaharchuk said.

Machine learning approaches like CNNs are typically trained on a labeled set of data that teaches the computer to recognize something in that data. In image transformation work, where the goal is to produce a better image, the image itself is the label, Zaharchuk says. “Every pixel is the answer I want to predict.” Think, for example, of a grainy image on a black-and-white TV, said Kevin Chen, a postdoctoral fellow in Stanford’s radiology department who worked on the low-dose PET project. If a neural net is trained on grainy and crisp images of the same objects, it can learn to output crisp images when given only grainy ones, even without their crisp counterparts. “The human visual system is great for tracking a tiger on the Serengeti,” Zaharchuk said, “but it wasn’t built to see different contrasts like this.” A neural net, by contrast, is agnostic to the challenges of interpreting subtle contrasts. “If there is information in an image, a neural net can efficiently find it,” Zaharchuk said.
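The “every pixel is the answer” idea can be sketched in a few lines. In this toy image-to-image setup, a single 3×3 convolution stands in for the CNN, and the per-pixel loss compares every output pixel against the corresponding pixel of the crisp target. The images, the averaging kernel, and all names here are illustrative assumptions, not the team’s actual model or data.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D convolution: a stand-in for one CNN layer."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
clean = rng.random((16, 16))                          # "crisp" target image
noisy = clean + 0.1 * rng.standard_normal((16, 16))   # "grainy" input image

# One fixed averaging kernel; in a real CNN this would be learned
smooth = np.full((3, 3), 1.0 / 9.0)
pred = conv2d(noisy, smooth)

# Per-pixel supervision: every output pixel is compared to the
# matching target pixel (cropped to the valid-convolution size)
target = clean[1:-1, 1:-1]
mse = np.mean((pred - target) ** 2)
```

In real training, a gradient step would then nudge the kernel weights to shrink this per-pixel loss; here the point is only that the target image itself supplies one label per pixel.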

 
In PET imaging for Alzheimer’s diagnosis, the goal is to spot amyloid plaques—hard, insoluble clumps of beta amyloid proteins that accumulate in the brain. These are the defining feature of Alzheimer’s disease. If none are present, the patient does not have the disease. Amyloid plaques seem to be invisible in an ordinary MRI. But when a patient is given a dose of a radioactive tracer that binds to plaques in the brain, a PET scanner can count the signals coming from the radioactive tracer and produce an image. If the image shows bright areas extending through the cortex—a thin band at the edge of the brain—then the brain contains amyloid plaques and the patient has Alzheimer’s disease.
 
For their initial low-dose amyloid PET/MRI study, which was published in the journal Radiology in 2018, Zaharchuk’s team used an imaging machine that can take PET and MRI images simultaneously. They obtained full-dose PET/MRI images for 39 people, and then simulated low-dose PET/MRI scans for the same people by randomly extracting 1% of the counts from the full-dose PET scans. This simulated dose was roughly equivalent to the radiation exposure a person receives during a transcontinental flight.
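The count-extraction step described above amounts to randomly thinning the detected events: each count is kept independently with probability 1%, which simulates a 1% dose because thinning a Poisson process yields another Poisson process. A minimal sketch, assuming the full-dose data are Poisson counts per detector bin; the array shape and intensity are made-up values for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical full-dose PET data: Poisson counts per detector bin
full_dose = rng.poisson(lam=500.0, size=(32, 32))

# Simulate a 1% dose by keeping each detected count with probability 0.01
# (binomial thinning of Poisson counts)
dose_fraction = 0.01
low_dose = rng.binomial(n=full_dose, p=dose_fraction)
```

The resulting `low_dose` array has roughly 1% of the original counts per bin, with the much noisier statistics a real ultra-low-dose scan would show.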
 
When they fed their images into their CNN, they found that combining PET with MRI scans yielded output images that were much clearer than images generated using PET scans alone. “This really speaks to how PET and MR complement each other,” Chen said. Even more striking and important: Outputs from the low-dose PET plus MRI model were just as good at revealing the presence or absence of amyloid plaques as the full-dose PET/MRI scan.
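Feeding PET and MRI into a CNN together is commonly done by stacking the modalities as input channels, so the network sees every modality at each pixel location. This is a minimal sketch with hypothetical arrays; which MRI contrasts the team actually used is not stated here, so T1 and T2 are assumptions for illustration.

```python
import numpy as np

h, w = 64, 64
rng = np.random.default_rng(7)

# Placeholder images standing in for co-registered acquisitions
low_dose_pet = rng.random((h, w)).astype(np.float32)
t1_mri = rng.random((h, w)).astype(np.float32)
t2_mri = rng.random((h, w)).astype(np.float32)

# Stack modalities along a channel axis, channels-first as a CNN would consume them
x = np.stack([low_dose_pet, t1_mri, t2_mri], axis=0)  # shape (3, 64, 64)
```

Because the PET and MRI images are acquired simultaneously on the same scanner, they are already spatially aligned, which is what makes this simple channel stacking workable.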
 
One key test remained: The team wanted to know whether the simulated result would hold true for an actual low-dose PET scan. Using HAI seed grant funding, Zaharchuk’s team obtained low- and high-dose PET images as well as simultaneously acquired MRI images from 18 patients. Because the company that makes the radioactive tracer could only sell Zaharchuk the FDA-approved full dose, the team had to create the low dose for each patient one drop at a time. Although the results of the study are not yet published, they are promising. “The quantitative image quality is very similar to the simulation,” Zaharchuk said.
 
Going forward, the team wants to determine whether a CNN can be trained to spot amyloid in an MRI image alone—without the need for any radiation dose. “It would be very liberating to no longer need a PET scanner,” Zaharchuk said. They will also test low doses of different radioactive tracers for other signs of Alzheimer’s disease, such as tau neurofibrillary tangles. And they will look at whether they can scan for amyloid and tau at the same time. In all of this work, AI will play a key role. “It’s a very exciting time for our field,” Zaharchuk said. “AI is basically extending our eyes to see things we couldn’t see before.”

Zaharchuk predicts that amyloid imaging will become more useful and more commonplace over time. For example, if a drug for Alzheimer’s disease is approved by the FDA (there’s one currently in the pipeline), doctors will need to order amyloid scans to determine patients’ eligibility for the drug, as well as to track disease progression and see whether the drug is working. Moreover, as baby boomers age and the number of people suffering from Alzheimer’s disease soars, the need for imaging diagnostics will only grow. The use of AI for image transformation will help ensure that safe, low-dose, affordable Alzheimer’s imaging is available to meet these future needs.