Universities Must Reclaim AI Research for the Public Good

October 30, 2025

With corporate AI labs turning inward, academia must carry forward the mantle of open science.

Ten years ago, Mark Zuckerberg made a surprise appearance at the academic conference NeurIPS to announce the launch of Facebook’s Fundamental AI Research unit (FAIR), signaling that AI research had leapt from university labs into the heart of Big Tech.

Fast-forward to today: Meta has announced drastic cuts to FAIR even as AI becomes a trillion-dollar global industry. DeepMind no longer publishes technical details of its leading AI models and has introduced six-month embargoes and stricter internal review of papers to maintain competitive advantage. Similarly, OpenAI is now ClosedAI and, like other corporate labs, increasingly favors tech blogs and internal product rollouts over peer-reviewed publication or open-source release.

The tide of openness in AI is receding — and with it, the foundation of scientific progress itself.

What We Mean by “Public Good”

Open science is, first and foremost, a public good — knowledge that benefits everyone rather than a select few. When research is shared openly, innovation accelerates, duplication is minimized, and ideas build upon one another. In AI research, these shared open-source tools, datasets, libraries, and benchmarks have enabled progress that emerged from one lab and spread globally — from students, to startups, to large industry deployments.

But when AI knowledge becomes privatized, we lose more than transparency: we lose the cross-pollination of ideas that drives genuine scientific progress. Universities and public institutions are uniquely positioned to sustain this public-good role because they are not structured primarily around shareholder return or product rollout; they can prioritize openness, reproducibility, talent training, and global participation.

How Openness Built Modern AI

The history of artificial intelligence is inseparable from the history of open science:

• The back-propagation algorithm, first shared openly in the 1980s, enabled deep learning’s revival.

• Successful deep learning techniques were then pioneered at universities, particularly for speech and image recognition in Geoff Hinton’s lab at the University of Toronto.

• Open datasets like TIMIT, TREC, MNIST, ImageNet, and Stanford Alpaca provided reproducible benchmarks and common ground for AI progress.

• Open-source code/libraries such as the Stanford CoreNLP toolkit and later TensorFlow, PyTorch, and FlashAttention offered free access to cutting-edge techniques.

• Shared benchmarks and challenges (e.g., GLUE, ImageNet competitions) trained generations of AI researchers and engineers.

This ecosystem created a flywheel of innovation: researchers published code and data; others used and improved it; students learned from it; startups and industry translated those advances into products. This was not incidental — it was the public-good function of open science in action. Given that context, the current corporate retreat from openness is concerning. It signals a shift from science as shared endeavor to research as proprietary product strategy.

Industry’s Retreat — and Talent Market Failure

The retreat from open science is understandable: corporate AI labs face immense commercial pressures and fierce competition. Models are expensive, research is costly, and first-mover advantage counts. Yet this shift has wider consequences for the public good and for education.

One stark indicator is the talent market: reports suggest that Meta Platforms offered signing packages on the order of $100 million or more to top AI researchers in a desperate bid to secure elite talent. 

This signals a market failure on the part of universities — the institutions that should be training the next generation of talent simply don’t have enough compute or data, or the right balance of researchers and software engineers, to meet the demand for experts in large AI model development. Having research students work as part of these larger teams is the right way for them to learn these important skills. If universities cannot train students in the ways required for future jobs, then we lose not only individual opportunity but also the broader workforce capacity necessary for innovation and public-good research.

The University’s Moment and the Public Good of Openness

Now is the moment for universities to reassert their historic role in advancing AI as a public good. Academia and the nonprofit sector have the capacity to prioritize openness, ethics, shared infrastructure, and global access over short-term commercial gains.

This means investing in open-data and open-model initiatives that remain freely accessible for research and education; building global partnerships that share compute, data, and expertise across borders and disciplines, so that knowledge is not siloed in a few companies or countries; and fostering interdisciplinary team science that integrates social science, ethics, and design with technical AI research to ensure AI serves human needs and societal values.

Universities must not only publish, but also sustain the public-good ecosystem that open science represents. In doing so, they preserve the foundation of talent development and discovery that powers every AI breakthrough.

Carrying the Mantle Forward

At the Stanford Institute for Human-Centered Artificial Intelligence (HAI), we believe that the next chapter of AI must combine scientific openness with human-centered values like dignity, equity, and the common good. While industry prioritizes product and competitive advantage, our aim is to cultivate a network of global collaborations between like-minded universities, governments, nonprofits, and industry partners that uphold the public-good mission. This is not about branding or competition — it’s about stewardship of the institutions and practices of open science. 

The most important problems facing the world need a new approach to scientific research: team science. Team science requires not only the larger collaborations of interdisciplinary academic researchers and software engineers that today only industry can assemble, but also the computation and data to go with them. We need new academic models to realize the breakthroughs that team science will unlock: distributed university-based research centers, connected across continents, that share leadership, data, computation, models, and professional talent. The work in these centers will focus on human flourishing rather than commercial exclusivity.

The question is whether we will rebuild the institutions of open science that made AI possible in the first place — or instead allow them to be eroded by concentrated commercial power. We have a fleeting opportunity to shape the trajectory of AI before it shapes us.

John Etchemendy, James Landay, and Fei-Fei Li are co-directors of the Stanford Institute for Human-Centered Artificial Intelligence (HAI), and Christopher Manning is an associate director of Stanford HAI.
