Introducing the Center for Research on Foundation Models (CRFM) | Stanford HAI
Introducing the Center for Research on Foundation Models (CRFM)

August 18, 2021

This new center at Stanford convenes scholars from across the university to study the technical principles and societal impact of foundation models.

A new initiative brings together more than 175 researchers across 10+ departments at Stanford University to understand and build a new type of technology that will power artificial intelligence (AI) systems in the future.

The Center for Research on Foundation Models (CRFM) is a new interdisciplinary initiative born out of the Stanford Institute for Human-Centered Artificial Intelligence (HAI) that aims to make fundamental advances in the study, development, and deployment of foundation models. Foundation models (e.g., BERT, GPT-3, CLIP, Codex) are models trained on broad data at scale such that they can be adapted to a wide range of downstream tasks. These models will not only transform how AI systems are built, but will also lead to significant societal consequences.

To better understand and shape this paradigm shift in AI, the CRFM brings together researchers to study the underlying technology (e.g., model architectures and training procedures, data and systems, evaluation and theory), its potential for high-impact applications (e.g., in healthcare, biomedicine, law, education), and its societal implications (e.g., economic and environmental effects, legal and ethical considerations, risks with respect to privacy, security, misuse and inequity).

An important part of conducting this research and shaping its direction is the ability to experiment with and build next-generation foundation models. Unfortunately, building these models is currently out of reach for most: the resources (engineering expertise, compute) needed to train them are highly concentrated in industry, and even the assets (data, code) required to reproduce their training are often not released.

A major focus of CRFM is to develop open, easy-to-use tools, as well as rigorous principles, for training and evaluating foundation models so that a more diverse set of participants can meaningfully critique and improve them.

“When we hear about GPT-3 or BERT, we’re drawn to their ability to generate text, code, and images, but more fundamentally and invisibly, these models are radically changing how AI systems will be built,” says Percy Liang, the director of CRFM, who is a Stanford associate professor of computer science and faculty member of HAI. “Our center will study and build foundation models from a multidisciplinary perspective, convening scholars from computer science, economics, social science, law, philosophy, and others.” 

The center has already produced an in-depth, 200-page report, On The Opportunities and Risks of Foundation Models. The paper, authored by more than 100 scholars across Stanford, investigates the core capabilities, key applications, technical principles, and broader societal ramifications of these models.

To complement the release of this comprehensive report, the center will host the Workshop on Foundation Models, which will open the discussion to researchers from both academia and industry, bringing a variety of perspectives and vital expertise to the many dimensions of foundation models.

“This new center embodies the spirit of HAI by fostering interdisciplinary scholarship on foundation models with a focus on the range of human-centered issues that these models entail. It will be a home at Stanford for the open scientific study and development of foundation models and work with the broader AI community in establishing professional norms for their use,” says HAI Denning Co-Director John Etchemendy.

Learn more about the center by visiting its website, or register for the Workshop on Foundation Models. 

 

Share
Link copied to clipboard!
Contributor(s)
dfb50f0b-2037-488a-a437-16ec143679e4
Related
  • Percy Liang
    Associate Professor of Computer Science, Stanford University | Director, Stanford Center for Research on Foundation Models | Senior Fellow, Stanford HAI
    Percy Liang

Related News

AI Challenges Core Assumptions in Education
Shana Lynch
Feb 19, 2026
News

We need to rethink student assessment, AI literacy, and technology’s usefulness, according to experts at the recent AI+Education Summit.

News

AI Challenges Core Assumptions in Education

Shana Lynch
Education, SkillsGenerative AIPrivacy, Safety, SecurityFeb 19

We need to rethink student assessment, AI literacy, and technology’s usefulness, according to experts at the recent AI+Education Summit.

AI Sovereignty’s Definitional Dilemma
Juan Pava, Caroline Meinhardt, Elena Cryst, James Landay
Feb 17, 2026
News
illustration showing world and digital lines and binary code

Governments worldwide are racing to control their AI futures, but unclear definitions hinder real policy progress.

News
illustration showing world and digital lines and binary code

AI Sovereignty’s Definitional Dilemma

Juan Pava, Caroline Meinhardt, Elena Cryst, James Landay
Government, Public AdministrationRegulation, Policy, GovernanceInternational Affairs, International Security, International DevelopmentFeb 17

Governments worldwide are racing to control their AI futures, but unclear definitions hinder real policy progress.

America's 250 Greatest Innovators: Celebrating The American Dream
Forbes
Feb 11, 2026
Media Mention

HAI Co-Director Fei-Fei Li named one of America's top 250 greatest innovators, alongside fellow Stanford affiliates Rodney Brooks, Carolyn Bertozzi, Daphne Koller, and Andrew Ng.

Media Mention
Your browser does not support the video tag.

America's 250 Greatest Innovators: Celebrating The American Dream

Forbes
Computer VisionGenerative AIFoundation ModelsEnergy, EnvironmentEthics, Equity, InclusionFeb 11

HAI Co-Director Fei-Fei Li named one of America's top 250 greatest innovators, alongside fellow Stanford affiliates Rodney Brooks, Carolyn Bertozzi, Daphne Koller, and Andrew Ng.