AI Improves Alzheimer’s Imaging

Date: January 09, 2020
Topics: Design, Human-Computer Interaction; Machine Learning

HAI seed grant helps make Alzheimer’s disease imaging safer and more affordable

Confirming a diagnosis of Alzheimer's disease requires an expensive PET scan that uses a high dose of full-body radiation. With seed grant support from the Stanford Institute for Human-Centered Artificial Intelligence (HAI), a group of Stanford researchers can now diagnose Alzheimer's disease just as successfully by applying artificial intelligence (AI) to low-dose PET scans and simultaneously acquired MRI images. "This work has the advantage for the patients of being safer, lower dose, faster, cheaper: all the things you'd want as a patient," said Greg Zaharchuk, professor of radiology at Stanford University and 2018 HAI seed grantee.

Using artificial intelligence, Zaharchuk's team has become adept at what's called image transformation: they take one image or set of images and use a type of AI called a convolutional neural network (CNN) to produce a new set of images as the output. "If the information you want exists in the images you have acquired, then you can train a classifier using a CNN," Zaharchuk said.

Machine learning approaches like CNNs typically feed a computer a labeled set of data that trains it to recognize something in the data. In image transformation work, where the goal is to produce a better image, the image itself is the label, Zaharchuk says. "Every pixel is the answer I want to predict." Think, for example, of a grainy image on a black-and-white TV, said Kevin Chen, a postdoctoral student in Stanford's radiology department who worked on the low-dose PET project. If a neural net is trained on grainy and crisp images of the same object, it can learn to output crisp images when given grainy images alone, without their crisp counterparts.

"The human visual system is great for tracking a tiger on the Serengeti," Zaharchuk said, "but it wasn't built to see different contrasts like this." A neural net, by contrast, is agnostic to the challenges of interpreting subtle contrasts. "If there is information in an image, a neural net can efficiently find it," Zaharchuk said.

In PET imaging for Alzheimer's diagnosis, the goal is to spot amyloid plaques: hard, insoluble clumps of beta-amyloid proteins that accumulate in the brain and are the defining feature of Alzheimer's disease. If none are present, the patient does not have the disease. Amyloid plaques appear to be invisible in an ordinary MRI. But when a patient is given a dose of a radioactive tracer that binds to plaques in the brain, a PET scanner can count the signals coming from the tracer and produce an image. If the image shows bright areas extending through the cortex, a thin band at the edge of the brain, then the brain contains amyloid plaques and the person has Alzheimer's disease.

For their initial low-dose amyloid PET/MRI study, published in the journal Radiology in 2018, Zaharchuk's team used an imaging machine that can take PET and MRI images simultaneously. They obtained full-dose PET/MRI images for 39 people, then simulated low-dose PET/MRI scans for the same people by randomly extracting 1% of the counts from the full-dose PET scans. This simulated dose was roughly equivalent to the radiation exposure a person receives during a transcontinental flight.

When they fed their images into the CNN, they found that combining PET with MRI scans yielded output images that were much clearer than images generated using PET scans alone. "This really speaks to how PET and MR complement each other," Chen said.
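To make this training setup concrete, here is a minimal sketch in PyTorch of the kind of image-to-image CNN described above, including a count-thinning step to simulate a 1% dose. The architecture, tensor shapes, and the Poisson-thinning approximation are illustrative assumptions, not the team's published model:

```python
# Illustrative sketch only: a toy image-to-image CNN in the spirit of the
# approach described above. Shapes, architecture, and the thinning step are
# assumptions, not the team's actual (more sophisticated) published network.
import torch
import torch.nn as nn

def simulate_low_dose(full_dose: torch.Tensor, fraction: float = 0.01) -> torch.Tensor:
    # Approximate "randomly extracting 1% of the counts" by Poisson thinning:
    # keeping each detected count with probability `fraction` yields an image
    # with roughly 1% of the original mean counts.
    return torch.poisson(full_dose * fraction)

class DenoisingCNN(nn.Module):
    # Toy network: low-dose PET and MRI enter as two input channels; a
    # synthetic full-dose PET image comes out. Every output pixel is supervised.
    def __init__(self, in_channels: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = DenoisingCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # per-pixel regression: "every pixel is the answer"

# Stand-in data: a batch of 4 co-registered 64x64 slices (made-up shapes).
full_dose = torch.rand(4, 1, 64, 64) * 100.0   # full-dose PET counts
mri = torch.rand(4, 1, 64, 64)                 # simultaneously acquired MRI
low_dose = simulate_low_dose(full_dose)        # simulated 1%-dose PET

optimizer.zero_grad()
inputs = torch.cat([low_dose, mri], dim=1)     # PET + MRI as input channels
loss = loss_fn(model(inputs), full_dose)       # full-dose image is the label
loss.backward()
optimizer.step()
```

Concatenating the MRI as an extra input channel is one simple way to let the network exploit the anatomical detail in the MR image, which is consistent with the team's finding that PET-plus-MRI inputs produced much clearer outputs than PET alone.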
Even more striking and important: outputs from the low-dose PET plus MRI model were just as good at revealing the presence or absence of amyloid plaques as the full-dose PET/MRI scan.

One key test remained: the team wanted to know whether the simulated result would hold true for an actual low-dose PET scan. Using HAI seed grant funding, Zaharchuk's team obtained low- and high-dose PET images, as well as simultaneously acquired MRI images, from 18 patients. Because the company that makes the radioactive tracer could only sell Zaharchuk the FDA-approved full dose, the team had to create the low dose for each patient one drop at a time. Although the results of the study are not yet published, they are promising. "The quantitative image quality is very similar to the simulation," Zaharchuk said.

Going forward, the team wants to determine whether a CNN can be trained to spot amyloid in an MRI image alone, without the need for any radiation dose. "It would be very liberating to no longer need a PET scanner," Zaharchuk said. They will also test low doses of different radioactive tracers for other signs of Alzheimer's disease, such as tau neurofibrillary tangles, and will look at whether they can scan for amyloid and tau at the same time. In all of this work, AI will play a key role. "It's a very exciting time for our field," Zaharchuk said. "AI is basically extending our eyes to see things we couldn't see before."

Zaharchuk predicts that amyloid imaging will become more useful and more commonplace over time. For example, if a drug for Alzheimer's disease is approved by the FDA (one is currently in the pipeline), doctors will need to order amyloid scans to determine patients' eligibility for the drug, as well as to track their disease progression and see whether the drug is working. Moreover, as baby boomers age and the number of people suffering from Alzheimer's disease soars, the need for imaging diagnostics will only grow. The use of AI for image transformation will help ensure that safe, low-dose, affordable Alzheimer's imaging is available to meet these future needs.
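As a rough illustration of what comparing "quantitative image quality" can look like, the sketch below computes two standard image-quality metrics, PSNR and SSIM, between a stand-in full-dose image and a synthesized one using scikit-image. The choice of metrics and the random data here are illustrative assumptions; the team's actual evaluation is described in their publications.

```python
# Illustrative only: standard image-quality metrics between a "true" full-dose
# image and a synthesized one. The data is random; the metric choice is an
# assumption, not necessarily what the team reported.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
full_dose = rng.random((64, 64))  # stand-in full-dose PET slice, values in [0, 1]
# Stand-in CNN output: the full-dose image plus a little noise, clipped to range.
synthesized = np.clip(full_dose + 0.05 * rng.standard_normal((64, 64)), 0.0, 1.0)

psnr = peak_signal_noise_ratio(full_dose, synthesized, data_range=1.0)
ssim = structural_similarity(full_dose, synthesized, data_range=1.0)
print(f"PSNR: {psnr:.1f} dB  SSIM: {ssim:.3f}")
```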

Contributor(s): Katharine Miller