
Introducing the Center for Research on Foundation Models (CRFM)

This new center at Stanford convenes scholars from across the university to study the technical principles and societal impact of foundation models.


A new initiative brings together more than 175 researchers across 10+ departments at Stanford University to understand and build a new type of technology that will power artificial intelligence (AI) systems in the future.

The Center for Research on Foundation Models (CRFM) is a new interdisciplinary initiative born out of the Stanford Institute for Human-Centered Artificial Intelligence (HAI) that aims to make fundamental advances in the study, development, and deployment of foundation models. Foundation models (e.g., BERT, GPT-3, CLIP, Codex) are models trained on broad data at scale such that they can be adapted to a wide range of downstream tasks. These models will not only transform how AI systems are built, but will also lead to significant societal consequences.

To better understand and shape this paradigm shift in AI, the CRFM brings together researchers to study the underlying technology (e.g., model architectures and training procedures, data and systems, evaluation and theory), its potential for high-impact applications (e.g., in healthcare, biomedicine, law, education), and its societal implications (e.g., economic and environmental effects, legal and ethical considerations, risks with respect to privacy, security, misuse and inequity).

An important part of conducting this research and shaping its direction is the ability to experiment with and build next-generation foundation models. Unfortunately, building these models is currently out of reach for most: the resources (engineering expertise, compute) needed to train them are highly concentrated in industry, and even the assets (data, code) required to reproduce their training are often not released.

A major focus of CRFM is to develop open, easy-to-use tools, as well as rigorous principles, for training and evaluating foundation models so that a more diverse set of participants can meaningfully critique and improve them.

“When we hear about GPT-3 or BERT, we’re drawn to their ability to generate text, code, and images, but more fundamentally and invisibly, these models are radically changing how AI systems will be built,” says Percy Liang, the director of CRFM, who is a Stanford associate professor of computer science and faculty member of HAI. “Our center will study and build foundation models from a multidisciplinary perspective, convening scholars from computer science, economics, social science, law, philosophy, and others.” 

The center has already produced an in-depth, 200-page report, On The Opportunities and Risks of Foundation Models. The paper, authored by more than 100 scholars across Stanford, investigates the core capabilities, key applications, technical principles, and broader societal ramifications of these models.

To complement the release of this comprehensive report, the center will host the Workshop on Foundation Models, which will open up the discussion to researchers representing a variety of perspectives from both academia and industry to provide vital expertise on the myriad dimensions of foundation models.

“This new center embodies the spirit of HAI by fostering interdisciplinary scholarship on foundation models with a focus on the range of human-centered issues that these models entail. It will be a home at Stanford for the open scientific study and development of foundation models and work with the broader AI community in establishing professional norms for their use,” says HAI Denning Co-Director John Etchemendy.

Learn more about the center by visiting its website, or register for the Workshop on Foundation Models.