AI Reveals How Brain Activity Unfolds Over Time | Stanford HAI

Date: January 21, 2026
Topics: Healthcare; Sciences (Social, Health, Biological, Physical)

Stanford researchers have developed a deep learning model that transforms overwhelming brain data into clear trajectories, opening new possibilities for understanding thought, emotion, and neurological disease.

Brain monitoring tools like functional MRI (fMRI) and EEG have long allowed neuroscientists to observe the brain at work: thinking, feeling, talking, doing. They can pinpoint where thoughts emerge, measure how strong the activity is, and watch as that activity moves across the brain over time. What they haven’t been able to do is interpret what it all means.

Now, researchers at Stanford University say they have applied deep learning to decipher such complex brain activity — in two and, in some cases, three dimensions and over long time scales — to provide neuroscientific insights that were once beyond scientists’ reach. The approach could reshape fields from psychology to oncology.

Space and Time

The problem to date has been the data. Brain recordings are intertwined across spatial and temporal dimensions; there is too much data, and it is too complex, to comprehend without a reliable analysis tool. Signals captured across multiple regions of the brain, changing all the while, are overwhelming and unmanageable, even for scientists.

“It’s a four-dimensional problem in the case of fMRI,” says Lei Xing, professor of medical physics in the Department of Radiation Oncology and professor of electrical engineering (by courtesy) at Stanford University, who is the senior author of a study describing the new model, published in the journal Nature Computational Science. “The signal from one point in the brain at a specific moment in time correlates to another in a different place and time in a very complex manner that we have struggled to understand completely, leading to fragmented and confusing outputs.”

With the help of AI’s vast computational powers, however, the new approach, known as Brain-dynamic Convolutional-Network-based Embedding, or BCNE for short, distills all this complex data into a simpler, interpretable form. BCNE represents brain activity as trajectories through the brain over time. The researchers feed the measured images, or other types of data such as EEG, through their model, which filters out meaningless noise while spotlighting valuable patterns in the data.
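The overall shape of that pipeline, taking a high-dimensional recording and reducing it to a low-dimensional trajectory over time, can be sketched in a few lines. The sketch below is purely illustrative: BCNE itself is a learned convolutional-network embedding described in the paper, while here ordinary principal component analysis stands in for the model, and all data are synthetic.

```python
# Illustrative sketch only: plain PCA (via NumPy's SVD) stands in for the
# convolutional-network embedding that BCNE actually learns. The toy data
# and dimensions below are invented for demonstration.
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for fMRI data: 200 time points x 500 voxels.
# Real fMRI is 4D (x, y, z, time); flattening space gives a time-by-voxel matrix.
n_time, n_voxels = 200, 500
t = np.linspace(0, 4 * np.pi, n_time)

# Two slow latent "brain states" plus noise drive all voxels.
latents = np.stack([np.sin(t), np.cos(t)], axis=1)       # shape (200, 2)
mixing = rng.normal(size=(2, n_voxels))                  # latent -> voxel weights
data = latents @ mixing + 0.1 * rng.normal(size=(n_time, n_voxels))

# Center the data and project onto the top two principal components:
# each time point becomes one point on a 2D "brain state trajectory".
centered = data - data.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
trajectory = centered @ vt[:2].T                         # shape (200, 2)

print(trajectory.shape)  # (200, 2): a 2D path through state space over time
```

In the real method, the learned embedding replaces this linear projection, which is what would let a model capture the nonlinear spatiotemporal correlations Xing describes rather than only linear structure.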

“BCNE uses this continuity of time and space to generate dynamic brain state trajectories. It’s like making movies of brain activity,” says Zixia Zhou, a postdoctoral researcher in Xing’s lab and first author of the study, which was partially sponsored by a seed grant from the Stanford Institute for Human-Centered Artificial Intelligence (HAI). “One can see not only the brain response but how it evolves and travels over time.”

In fact, in one experiment the researchers recorded the brain activity of people watching movies to note how their brains transition from scene to scene and to evaluate changes in perception, emotion and comprehension as the narrative unfolds. In other experiments with lab monkeys and rats, BCNE captured detailed information about how physical movements are signaled from the brain to the muscles and provided other detailed information about the animals’ brain activity.

Open Questions

Xing specializes in biomedical physics and radiation oncology, a field where he projects that BCNE has vast potential to study how the brain adapts after treatments to remove brain tumors. In neuroscience, the researchers think BCNE could be used to study memory, learning, decision-making and other cognitive processes. In clinics, they predict BCNE could help diagnose and monitor neurological conditions like Parkinson’s, depression and schizophrenia, or potentially evaluate the effectiveness of therapeutic and pharmaceutical treatments.

In its initial iteration, Xing notes, BCNE is a promising proof of concept of AI’s interpretive capabilities, but there is still much room to grow. Next, Xing and his team intend to bring BCNE to clinical applications and to explore real-time brain monitoring and prediction techniques. They would like to refine the method and apply it to more varied and complex datasets, especially those with irregular or limited sampling. They also hope to integrate additional imaging modalities, such as MRI and CT, to provide ever more complete and insightful brain-state mappings.

“For now, our approach seems to open more questions than it answers,” Xing says. “But there is much opportunity ahead.”

Contributing Stanford authors include: Junyan Liu, Wei Emma Wu, Sheng Liu, Qingyue Wei, Rui Yan and Md Tauhidul Islam (co-corresponding author).

Contributor(s)
Andrew Myers

Related News

What Your Phone Knows Could Help Scientists Understand Your Health
Katharine Miller
Mar 04, 2026
News

Stanford scientists have released an open-source platform that lets health researchers study the “screenome” – the digital traces of our daily lives – while protecting participants’ privacy.


How a HAI Seed Grant Helped Launch a Disease-Fighting AI Platform
Dylan Walsh
Mar 03, 2026
News

Stanford scientists in Senegal hunting for schistosomiasis—a parasitic disease infecting 200+ million people worldwide—used AI to transform local field work into satellite-powered disease mapping.


From Privacy to ‘Glass Box’ AI, Stanford Students Are Targeting Real-World Problems
Nikki Goth Itoi
Feb 27, 2026
News

An Amazon-backed fellowship will support 10 Stanford PhD students whose work explores everything from how we communicate to understanding disease and protecting our data.
