Percy Liang on the Center for Research on Foundation Models' First and Next 30 Years

Date
June 30, 2022
Topics
Natural Language Processing

In this podcast, two scholars discuss being 'weirded out' by GPT-3, unresolved questions in building best practices, and the future of CRFM.

Center for Research on Foundation Models Director Percy Liang recently appeared on the CS224U podcast to help commemorate the one-year anniversary of the center. The discussion ranged widely over the origins of foundation models (FMs), the recent past and long-term future of CRFM, and the major ethical and societal questions facing FM research right now.

Visit the podcast page to listen to the full episode, read the transcript, and check out the extensive show notes.


Percy and I began our discussion by reflecting on CRFM’s first year. The center, born out of Stanford HAI, held an agenda-setting workshop that included many diverse perspectives on FMs, and center affiliates collaborated on a giant report and open-sourced a substantial new library for training FMs. Perhaps most importantly, though, CRFM helped to create a rich, multidisciplinary community focused on developing and assessing FMs, and understanding their likely near-term and long-term impacts on individuals and society.

“The center was formed around this idea that we had a lot to offer, that academia is full of deep interdisciplinary expertise across not just people in AI, but also from the social sciences, economics, political science, the medical school,” Percy told me.


Weirded Out by GPT-3

Reflecting on year one, both Percy and I recalled the moment when we realized that FMs were going to have a major impact. We sensed change coming when ELMo, BERT, and GPT-2 appeared, but the really eye-opening moments were when we first got to interact with GPT-3. Both of us thought we’d quickly find its limits, and so we were “weirded out” by how flexible and systematic its behaviors seemed to be.

“There’s this idea of emergence that caught me and also, I think, many researchers, by surprise – that you can just train a language model, predict the next token on tons of raw text, and then it can answer questions, it can summarize documents, have dialogue, translate, classify text, learn all sorts of different kinds of pattern manipulation, format dates, and so on. It was just really eye-opening …” Percy noted.
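The emergence Percy describes rests on one unifying interface: every task is posed as plain-text continuation, so a single next-token predictor can in principle handle all of them. The sketch below uses hypothetical prompts and a hypothetical helper (no particular model or API is assumed); it only illustrates how disparate tasks reduce to the same format:

```python
# Illustrative sketch: each task becomes "predict the tokens that come next."
# The prompts and helper are hypothetical; any next-token language model
# could, in principle, be asked to continue these strings.

def as_continuation_task(instruction: str, example: str) -> str:
    """Frame an arbitrary task as a plain-text continuation prompt."""
    return f"{instruction}\n{example}"

prompts = {
    "question answering": as_continuation_task(
        "Answer the question.", "Q: Who directs CRFM?\nA:"),
    "translation": as_continuation_task(
        "Translate English to French.", "cheese =>"),
    "date formatting": as_continuation_task(
        "Rewrite the date in ISO 8601 format.", "June 30, 2022 =>"),
}

# Every task, regardless of its nature, is now the same problem:
# continue the text.
for task, prompt in prompts.items():
    print(f"[{task}]\n{prompt}\n")
```

Because each prompt is just text to be continued, the same model interface serves every task: swapping tasks means swapping strings, not architectures.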

These moments were the spark that led Percy to found CRFM. The initial plan was to create a GPT-3 clone, but this quickly led to a broader mission to “evaluate and benchmark and document what’s happening” with FMs as openly as possible and with as many voices as possible.


The “Best Practices” Document

We timed this podcast episode so that it could serve in part as a retrospective on Percy’s recent Twitter Spaces event with representatives from the FM-oriented startups AI21, Cohere, and OpenAI. Those startups recently collaborated on a document titled Best practices for deploying language models, which was the focus of the Twitter Spaces event.

The document strikes both of us as a step in the right direction, but our discussion homes in on the pressing questions that it raises but does not resolve – for example, who will decide what counts as misuse, who will be held responsible when an FM causes harm in the world, and what sort of legal infrastructure is likely to arise around FM-based technologies?

“If we are moving to a world where there’re going to be foundation models just offered up as services, as APIs, which seems like a likely future, then, yeah, the question is, what is the contract? When someone buys the service, what can it do and cannot do? That’s why benchmarking is so important. We need a nutrition label or a spec sheet for these objects that we’re selling or people are buying and using.”


CRFM’s Present and Future Goals

What role will CRFM play in resolving these questions? The recent blog post The time is now to develop community norms for the release of foundation models from Percy, Rishi Bommasani, Kathleen Creel, and Rob Reich includes concrete policy proposals relating to institutional or community review of FM releases, and the center is currently focused on developing an ecosystem for multifaceted benchmarking of FMs.

Might efforts like these eventually lead CRFM to become an independent auditor of systems that embed FMs? Will CRFM get directly involved in educating lawmakers and guiding public policy? How far outside of Stanford will it reach in the coming decades? Percy is unsure of the answers to these questions (and conveys a certain longing to return to pure research!), but his near-term plans deliberately mix research, applications, and social responsibility:

“You can think about three pillars,” he told me. “There’s social responsibility, where we’re trying to document, evaluate, and increase the transparency of all these models to build community norms. […] The second is technical advances. We haven’t talked about this as much, but there’s new ways of doing pre-training, there’s different architectures […]. And what CRFM can hope to do is provide the infrastructure so that students and researchers can leverage and make technical advances in those regards. And the third is applications, where we are collaborating with members of other departments and other schools on campus to look at the different data sets and the challenges associated with them.”

The longer-term future of the center is harder to predict given the current incredible pace of change, but Percy does offer a clear vision for much wider participation:

“I hope that CRFM will be a respectable player in terms of shaping the norms around how these technologies are built. […] And I also think that it doesn’t make sense for a mission of that scale to be restricted only to Stanford. It should be something that’s really decentralized and governed in a much more decentralized way.”


Conclusion

The episode is peppered with fun and thought-provoking digressions into current research challenges, productive application areas, the nature of scaling, the complex place that Stanford occupies in the AI landscape, privacy, and other timely topics.

The episode is an excellent companion to an earlier CS224U episode with CRFM student affiliate Rishi Bommasani. Follow this link to listen to the episode, read the transcript, and check out the extensive show notes.

Chris Potts is a professor and chair of the Stanford Department of Linguistics, and a professor, by courtesy, of the Stanford Department of Computer Science.

Stanford HAI's mission is to advance AI research, education, policy, and practice to improve the human condition. Learn more.

Contributor(s)
Chris Potts
Related
  • Percy Liang
    Associate Professor of Computer Science, Stanford University | Director, Stanford Center for Research on Foundation Models | Senior Fellow, Stanford HAI

Related News

AI Leaders Discuss How To Foster Responsible Innovation At TIME100 Roundtable In Davos
TIME
Jan 21, 2026
Media Mention

HAI Senior Fellow Yejin Choi discussed responsible AI model training at Davos, asking, “What if there could be an alternative form of intelligence that really learns … morals, human values from the get-go, as opposed to just training LLMs on the entirety of the internet, which actually includes the worst part of humanity, and then we then try to patch things up by doing ‘alignment’?” 


An AI Social Coach Is Teaching Empathy to People with Autism
Sarah Wells
Aug 13, 2025
News

A specialized chatbot named Noora is helping individuals with autism spectrum disorder practice their social skills on demand.


Social Science Moves In Silico
Katharine Miller
Jul 25, 2025
News

Despite limitations, advances in AI offer social science researchers the ability to simulate human subjects.
