AI Reveals How Brain Activity Unfolds Over Time

Date: January 21, 2026
Topics: Healthcare, Sciences (Social, Health, Biological, Physical)

Stanford researchers have developed a deep learning model that transforms overwhelming brain data into clear trajectories, opening new possibilities for understanding thought, emotion, and neurological disease.

Brain monitoring tools like functional MRI (fMRI) and EEG have long allowed neuroscientists to observe the brain at work: thinking, feeling, talking, doing. They can pinpoint where thoughts emerge in the brain. They can measure how strong the activity is. And they can watch as activity spreads across the brain over time. What they haven’t been able to do is interpret what it all means.

Now, researchers at Stanford University say they have applied deep learning to decipher such complex brain activity — in two and, in some cases, three dimensions and over long time scales — to provide neuroscientific insights that were once beyond scientists’ reach. The approach could reshape fields from psychology to oncology.

Space and Time

The problem to date has been the data: brain signals are intertwined across spatial and temporal dimensions, and they are too voluminous and too complex to comprehend without a reliable analysis tool. Signals captured across multiple regions of the brain, all changing at once, overwhelm even experienced scientists.

"It’s a four-dimensional problem in the case of fMRI,” says Lei Xing, professor of medical physics in the Department of Radiation Oncology and professor of electrical engineering (by courtesy) in Department of Electrical Engineering at Stanford University, who is the senior author of a study explaining the new model published in the journal Nature Computational Science. “The signal from one point in the brain at a specific moment in time correlates to another in a different place and time in a very complex manner that we have struggled to understand completely, leading to fragmented and confusing outputs.”

With the help of AI’s vast computational power, however, the new approach, known as Brain-dynamic Convolutional-Network-based Embedding, or BCNE for short, distills all this complex data into a simpler, interpretable form. BCNE represents brain activity as trajectories through the brain over time. The researchers feed measured images or other types of data, such as EEG recordings, through their model, which filters out meaningless noise while spotlighting valuable patterns.
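To make the idea concrete, here is a minimal, hypothetical sketch in PyTorch of the general technique the article describes: a small convolutional encoder maps each 3D brain volume in an fMRI scan to a low-dimensional point, so the full 4D scan becomes a trajectory through embedding space. This is not the published BCNE architecture; the layer sizes, class names, and pooling choices are illustrative assumptions only.

    # Hypothetical sketch only: NOT the published BCNE model.
    import torch
    import torch.nn as nn

    class FrameEncoder(nn.Module):
        """Maps one 3D brain volume to a low-dimensional embedding point."""
        def __init__(self, embed_dim: int = 3):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=3, stride=2, padding=1),   # downsample
                nn.ReLU(),
                nn.Conv3d(8, 16, kernel_size=3, stride=2, padding=1),  # downsample again
                nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),  # global pooling suppresses voxel-level noise
            )
            self.head = nn.Linear(16, embed_dim)

        def forward(self, vol: torch.Tensor) -> torch.Tensor:
            # vol: (n_frames, 1, depth, height, width)
            return self.head(self.conv(vol).flatten(1))

    # A toy 4D "scan": 50 time frames of a 32x32x32 volume.
    scan = torch.randn(50, 1, 32, 32, 32)
    encoder = FrameEncoder(embed_dim=3)
    with torch.no_grad():
        trajectory = encoder(scan)  # (50, 3): one embedded point per time frame
    print(trajectory.shape)        # torch.Size([50, 3])

In practice such an encoder would be trained, for example to keep neighboring time frames close in the embedding, rather than applied with random weights; the sketch only shows the shape of the computation.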

“BCNE uses this continuity of time and space to generate dynamic brain state trajectories. It’s like making movies of brain activity,” says Zixia Zhou, a postdoctoral researcher in Xing’s lab and first author of the study, which was partially sponsored by a seed grant from the Stanford Institute for Human-Centered Artificial Intelligence (HAI). “One can see not only the brain response but how it evolves and travels over time.”
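Continuing the hypothetical sketch above, the embedded points can be drawn as a path, one point per time frame, which is one way to picture the “movie” Zhou describes; the plotting code is an illustrative assumption, not the study’s visualization.

    # Render the hypothetical trajectory from the sketch above as a 3D path.
    import matplotlib.pyplot as plt

    pts = trajectory.numpy()  # (50, 3): one row per time frame
    ax = plt.figure().add_subplot(projection="3d")
    ax.plot(pts[:, 0], pts[:, 1], pts[:, 2], marker="o", markersize=2)
    ax.set_xlabel("embedding dim 1")
    ax.set_ylabel("embedding dim 2")
    ax.set_zlabel("embedding dim 3")
    ax.set_title("Brain-state trajectory over time (illustrative)")
    plt.show()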

In fact, in one experiment the researchers recorded the brain activity of people watching movies to note how their brains transition from scene to scene and to evaluate changes in perception, emotion, and comprehension as the narrative unfolds. In other experiments, with lab monkeys and rats, BCNE captured detailed information about how physical movements are signaled from the brain to the muscles, along with other insights into the animals’ brain activity.

Open Questions

Xing specializes in biomedical physics and radiation oncology, a field where he sees vast potential for BCNE to study how the brain adapts after treatments to remove brain tumors. In neuroscience, the researchers think BCNE could be used to study memory, learning, decision-making, and other ideation processes. In clinics, they predict BCNE could help diagnose and monitor neurological conditions like Parkinson’s, depression, and schizophrenia, and potentially evaluate the effectiveness of therapeutic and pharmaceutical treatments.

In its initial iteration, Xing notes, BCNE is a promising proof of concept of AI’s interpretive capabilities, but there is still much room to grow. Next, Xing and his team intend to bring BCNE to clinical applications and to explore real-time brain monitoring and prediction techniques. They would like to refine the method and apply it to more varied and complex datasets, especially those with irregular or limited sampling. They also hope to integrate additional imaging modalities, such as MRI and CT scans, to provide ever more complete and insightful brain-state mappings.

“For now, our approach seems to open more questions than it answers,” Xing says. “But there is much opportunity ahead.”

Contributing Stanford authors include Junyan Liu, Wei Emma Wu, Sheng Liu, Qingyue Wei, Rui Yan, and Md Tauhidul Islam (co-corresponding author).

Contributor: Andrew Myers

Related News

AI Can’t Do Physics Well – And That’s a Roadblock to Autonomy
Andrew Myers | Jan 26, 2026 | News
Topics: Computer Vision; Robotics; Sciences (Social, Health, Biological, Physical)

QuantiPhy is a new benchmark and training framework that evaluates whether AI can numerically reason about physical properties in video images. QuantiPhy reveals that today’s models struggle with basic estimates of size, speed, and distance but offers a way forward.

Why 'Zero-Shot' Clinical Predictions Are Risky
Suhana Bedi, Jason Alan Fries, and Nigam H. Shah | Jan 07, 2026 | News
Topics: Healthcare; Foundation Models

These models generate plausible timelines from historical patterns; without calibration and auditing, their “probabilities” may not reflect reality.

Stanford Researchers: AI Reality Check Imminent
Forbes | Dec 23, 2025 | Media Mention
Topics: Generative AI; Economy, Markets; Healthcare; Communications, Media

Shana Lynch, HAI Head of Content and Associate Director of Communications, pointed out that the “era of AI evangelism is giving way to an era of AI evaluation” in her AI predictions piece, where she interviewed several Stanford AI experts on their insights for AI impacts in 2026.