Introducing the Center for Research on Foundation Models (CRFM) | Stanford HAI

Announcement

Introducing the Center for Research on Foundation Models (CRFM)

August 18, 2021

This new center at Stanford convenes scholars from across the university to study the technical principles and societal impact of foundation models.

A new initiative brings together more than 175 researchers across 10+ departments at Stanford University to understand and build a new type of technology that will power artificial intelligence (AI) systems in the future.

The Center for Research on Foundation Models (CRFM) is a new interdisciplinary initiative born out of the Stanford Institute for Human-Centered Artificial Intelligence (HAI) that aims to make fundamental advances in the study, development, and deployment of foundation models. Foundation models (e.g., BERT, GPT-3, CLIP, Codex) are models trained on broad data at scale such that they can be adapted to a wide range of downstream tasks. These models will not only transform how AI systems are built, but will also lead to significant societal consequences.

To better understand and shape this paradigm shift in AI, the CRFM brings together researchers to study the underlying technology (e.g., model architectures and training procedures, data and systems, evaluation and theory), its potential for high-impact applications (e.g., in healthcare, biomedicine, law, education), and its societal implications (e.g., economic and environmental effects, legal and ethical considerations, risks with respect to privacy, security, misuse and inequity).

An important part of conducting this research and shaping its direction is the ability to experiment with and build next-generation foundation models. Unfortunately, building these models is currently out of reach for most researchers: the resources (engineering expertise, compute) needed to train them are highly concentrated in industry, and even the assets (data, code) required to reproduce their training are often not released.

A major focus of CRFM is to develop open, easy-to-use tools, as well as rigorous principles, for training and evaluating foundation models so that a more diverse set of participants can meaningfully critique and improve them.

“When we hear about GPT-3 or BERT, we’re drawn to their ability to generate text, code, and images, but more fundamentally and invisibly, these models are radically changing how AI systems will be built,” says Percy Liang, the director of CRFM, who is a Stanford associate professor of computer science and faculty member of HAI. “Our center will study and build foundation models from a multidisciplinary perspective, convening scholars from computer science, economics, social science, law, philosophy, and others.” 

The center has already produced an in-depth, 200-page report, On the Opportunities and Risks of Foundation Models. The paper, authored by more than 100 scholars across Stanford, investigates the core capabilities, key applications, technical principles, and broader societal ramifications of these models.

To complement the release of this comprehensive report, the center will host the Workshop on Foundation Models, which will open the discussion to researchers from both academia and industry, bringing a variety of perspectives and vital expertise to bear on the many dimensions of foundation models.

“This new center embodies the spirit of HAI by fostering interdisciplinary scholarship on foundation models with a focus on the range of human-centered issues that these models entail. It will be a home at Stanford for the open scientific study and development of foundation models and work with the broader AI community in establishing professional norms for their use,” says HAI Denning Co-Director John Etchemendy.

Learn more about the center by visiting its website, or register for the Workshop on Foundation Models. 

 

Contributor(s)
Percy Liang, Associate Professor of Computer Science, Stanford University | Director, Stanford Center for Research on Foundation Models | Senior Fellow, Stanford HAI
Related News

The Evolution of Safety: Stanford’s Mykel Kochenderfer Explores Responsible AI in High-Stakes Environments
Scott Hadly
May 09, 2025
News

As AI technologies rapidly evolve, Professor Kochenderfer leads the charge in developing effective validation mechanisms to ensure safety in autonomous systems like vehicles and drones.


How Stanford HAI Defines Human-Centered AI With Executive Director Russell Wald
Technovation
May 08, 2025
Media Mention

In this podcast, HAI Executive Director Russell Wald explores how universities, policymakers, and industry must collaborate to keep AI human-centered. Wald shares takeaways from the AI Index, explains how China is narrowing the performance gap, and outlines why academic institutions are vital to ethical AI leadership.


Ambient Intelligence, Human Impact
May 07, 2025
News

Health care providers struggle to catch early signals of cognitive decline. AI and computational neuroscientist Ehsan Adeli’s innovative computer vision tools may offer a solution.
