Three Reasons Why Universities are Crucial for Understanding AI

Date: September 05, 2025
Topics: Sciences (Social, Health, Biological, Physical)

There is a “fierce urgency” to understand how AI works, says Stanford physicist Surya Ganguli, who is leading a project to bring the inner workings of AI to light through transparent, foundational research. 

Artificial intelligence is already transforming almost every aspect of human work and life: It can perform surgery, write code, and even make art. While it is a powerful tool, no one fully understands how AI learns or reasons—not even the companies developing it.

This is where the academic mission to conduct open, scientific research can make a real difference, says Surya Ganguli. The Stanford physicist is leading “The Physics of Learning and Neural Computation,” a collaborative project recently launched by the Simons Foundation that brings together physicists, computer scientists, mathematicians, and neuroscientists to help break AI out of its proverbial “black box.” 

Surya Ganguli will oversee a collaboration called The Physics of Learning and Neural Computation.

“We need to bring the power of our best theoretical ideas from many fields to confront the challenge of scientifically understanding one of the most important technologies to have appeared in decades,” said Ganguli, associate professor of applied physics in Stanford's School of Humanities and Sciences. “For something that’s of such societal importance, we have got to do it in academia, where we can share what we learn openly with the world.”

There are many compelling reasons why this work needs to be done by universities, says Ganguli, who is also a senior fellow at the Stanford Institute for Human-Centered AI. Here are three: 

Improving Scientific Understanding

The companies on the frontier of AI technology are more focused on improving performance, without necessarily having a complete scientific understanding of how the technology works, Ganguli contends. 

“It’s imperative that the science catches up with the engineering,” he said. “The engineering of AI is way ahead, so we need a concerted, all-hands-on-deck approach to advance the science.”

AI systems are developed very differently from something like a car, whose physical parts are explicitly designed and rigorously tested. AI neural networks are inspired by the human brain, with a multitude of connections. These connections are then trained implicitly, using data.
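
To make this concrete, here is a minimal sketch (a generic illustration in PyTorch, not code from the project): a tiny network whose connection weights are never set by hand but are instead adjusted implicitly, by gradient descent, to fit example data.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A small network: nothing about its task is explicitly designed in.
model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# Toy data: the XOR function, learned purely from examples.
x = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

for step in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)  # "correct it when it is wrong"
    loss.backward()              # errors flow back through the connections
    optimizer.step()             # weights change implicitly

# The trained weights now fit the data, but inspecting them does not
# directly reveal how the network computes XOR -- the "black box."
print(model(x).detach().round().squeeze())  # approximately [0, 1, 1, 0]
```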

Ganguli likens that training to human learning: We educate children by giving them information and correcting them when they are wrong. We know when a child learns a word like cat or a concept like generosity, but we do not know explicitly what happens in the brain as it acquires that knowledge.

The same is true of AI, but it makes strange mistakes that a human would never make. Researchers believe it is critical to understand why, for both practical and ethical reasons.

“AI systems are derived in a very implicit way, but it’s not clear that we’re baking in the same empathy and caring for humanity that we do in our children,” Ganguli said. “We try a lot of ad hoc stuff to bake human values into these large language models, but it’s not clear that we’ve figured out the best way to do it.”

Physics Can Tackle AI's Complexity

Traditionally, the field of physics has focused on studying complex natural systems. While AI has “artificial” in its very name, its complexity lends itself well to physics, a field that has increasingly expanded beyond its historical boundaries into areas such as biology and neuroscience.

Physicists have a lot of experience working with high-dimensional systems, Ganguli pointed out. For example, some physicists study materials with many billions of interacting particles whose complex, dynamic laws influence their collective behavior and give rise to surprising “emergent” properties: new characteristics that arise from the interactions but are not present in the individual particles themselves.
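
As a toy illustration of this kind of emergence (my example, not the collaboration’s), consider the two-dimensional Ising model: each lattice site carries a spin of +1 or -1 that interacts only with its four nearest neighbors, yet below a critical temperature the lattice as a whole spontaneously magnetizes.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32                                  # 32 x 32 lattice of spins (+1 / -1)
spins = rng.choice([-1, 1], size=(N, N))
T = 1.5                                 # below the critical temperature ~2.27

for _ in range(500_000):                # Metropolis Monte Carlo updates
    i, j = rng.integers(0, N, size=2)
    # Energy change from flipping spin (i, j), with periodic boundaries.
    neighbors = (spins[(i + 1) % N, j] + spins[(i - 1) % N, j]
                 + spins[i, (j + 1) % N] + spins[i, (j - 1) % N])
    dE = 2 * spins[i, j] * neighbors
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        spins[i, j] *= -1

# Emergent collective order: the net magnetization ends up far from the
# near-zero value of a random configuration.
print("magnetization per spin:", abs(spins.mean()))
```

No line of the update rule mentions magnetization; the global order arises from local interactions alone.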

AI is similar, with many billions of weights that constantly change during training, and the project’s main goal is to better understand this process. Specifically, the researchers want to know how learning dynamics, training data, and the architecture of an AI system interact to produce emergent computations, such as AI creativity and reasoning, whose origins are not currently understood. Once this interaction is uncovered, it will likely be easier to control the process by choosing the right data for a given problem.

It might also be possible to create smaller, more efficient networks that can do more with fewer connections, said project member Eva Silverstein, professor of physics in the School of Humanities and Sciences.

“It’s not that the extra connections necessarily cause a problem. It’s more that they’re expensive,” she said. “Sometimes they can be pruned after training, but you have to understand a lot about the system—learning and reasoning dynamics, structure of data, and architecture—in order to be able to predict in advance how it’s going to work.”
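
One common way connections are removed after training is magnitude pruning, sketched below with PyTorch’s built-in pruning utilities (a generic illustration, not the collaboration’s method):

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(16, 16)  # stands in for one layer of a trained network

# Zero out the 50% of weights with the smallest magnitudes -- the
# "expensive" connections that contribute least.
prune.l1_unstructured(layer, name="weight", amount=0.5)

sparsity = (layer.weight == 0).float().mean().item()
print(f"fraction of pruned connections: {sparsity:.0%}")  # 50%
```

Whether the pruned network still learns and reasons well is exactly the kind of question that, per Silverstein, requires understanding data, dynamics, and architecture together.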

Ganguli and Silverstein are two of the 17 principal investigators representing 12 universities on the Simons Foundation project. Ganguli hopes to expand participation further, ultimately bringing a new generation of physicists into the AI field. The collaboration will be holding workshops and summer school sessions to build the scientific community. 

Academic Findings Are Shared

Everything that comes out of this collaboration will be shared, with findings vetted and published in peer-reviewed journals. In contrast, companies developing AI products with the goal of delivering economic returns have little incentive, and no obligation, to share information with others.

“We need to do open science because walls of secrecy are being erected around these frontier AI companies,” Ganguli said. “I really love being at the university, where our very mission is to share what we learn with the world.”

This story was first published by the Stanford School of Humanities and Sciences.

Contributor: Sara Zaske

Related News

AI Can’t Do Physics Well – And That’s a Roadblock to Autonomy
Andrew Myers
Date: Jan 26, 2026
Topics: Computer Vision; Robotics; Sciences (Social, Health, Biological, Physical)
QuantiPhy is a new benchmark and training framework that evaluates whether AI can numerically reason about physical properties in video images. QuantiPhy reveals that today’s models struggle with basic estimates of size, speed, and distance but offers a way forward.

AI Reveals How Brain Activity Unfolds Over Time
Andrew Myers
Date: Jan 21, 2026
Topics: Healthcare; Sciences (Social, Health, Biological, Physical)
Stanford researchers have developed a deep learning model that transforms overwhelming brain data into clear trajectories, opening new possibilities for understanding thought, emotion, and neurological disease.

Stanford Research Teams Receive New Hoffman-Yee Grant Funding for 2025
Nikki Goth Itoi
Date: Dec 09, 2025
Topics: Arts, Humanities; Ethics, Equity, Inclusion; Foundation Models; Generative AI; Healthcare; Sciences (Social, Health, Biological, Physical)
Five teams will use the funding to advance their work in biology, generative AI and creativity, policing, and more.