
Percy Liang on the Center for Research on Foundation Models' First and Next 30 Years

In this podcast, two scholars discuss being 'weirded out' by GPT-3, unresolved questions in building best practices, and the future of CRFM.


Chris Potts, left, is a Stanford professor and chair of the Department of Linguistics, and Stanford computer scientist Percy Liang helms the Center for Research on Foundation Models.

Center for Research on Foundation Models Director Percy Liang recently appeared on the CS224U podcast to help commemorate the one-year anniversary of the center. The discussion ranged widely over the origins of foundation models (FMs), the recent past and long-term future of CRFM, and the major ethical and societal questions facing FM research right now.

Visit the podcast page to listen to the full episode, read the transcript, and check out the extensive show notes.


Percy and I began our discussion by reflecting on CRFM’s first year. The center, born out of Stanford HAI, held an agenda-setting workshop that included many diverse perspectives on FMs, and center affiliates collaborated on a giant report and open-sourced a substantial new library for training FMs. Perhaps most importantly, though, CRFM helped to create a rich, multidisciplinary community focused on developing and assessing FMs, and understanding their likely near-term and long-term impacts on individuals and society.

“The center was formed around this idea that we had a lot to offer, that academia is full of deep interdisciplinary expertise across not just people in AI, but also from the social sciences, economics, political science, the medical school,” Percy told me.

Weirded Out by GPT-3

Reflecting on year one, both Percy and I recalled the moment when we realized that FMs were going to have a major impact. We sensed change coming when ELMo, BERT, and GPT-2 appeared, but the really eye-opening moments were when we first got to interact with GPT-3. Both of us thought we’d quickly find its limits, and so we were “weirded out” by how flexible and systematic its behaviors seemed to be.

“There’s this idea of emergence that caught me and also, I think, many researchers, by surprise – that you can just train a language model, predict the next token on tons of raw text, and then it can answer questions, it can summarize documents, have dialogue, translate, classify text, learn all sorts of different kinds of pattern manipulation, format dates, and so on. It was just really eye-opening …” Percy noted.

These moments were the spark that led Percy to found CRFM. The initial plan was to create a GPT-3 clone, but this quickly led to a broader mission to “evaluate and benchmark and document what’s happening” with FMs as openly as possible and with as many voices as possible.

The “Best Practices” Document

We timed this podcast episode so that it could serve in part as a retrospective on Percy’s recent Twitter Spaces event with representatives from the FM-oriented startups AI21, Cohere, and OpenAI. Those startups recently collaborated on a document titled Best practices for deploying language models, which was the focus of the Twitter Spaces event.

The document strikes both of us as a step in the right direction, but our discussion homes in on the pressing questions that it raises but does not resolve – for example, who will decide what counts as misuse, who will be held responsible when an FM causes harm in the world, and what sort of legal infrastructure is likely to arise around FM-based technologies?

“If we are moving to a world where there’re going to be foundation models just offered up as services, as APIs, which seems like a likely future, then, yeah, the question is, what is the contract? When someone buys the service, what can it do and cannot do? That’s why benchmarking is so important. We need a nutrition label or a spec sheet for these objects that we’re selling or people are buying and using.”


CRFM’s Present and Future Goals

What role will CRFM play in resolving these questions? The recent blog post “The time is now to develop community norms for the release of foundation models” from Percy, Rishi Bommasani, Kathleen Creel, and Rob Reich includes concrete policy proposals relating to institutional or community review of FM releases, and the center is currently focused on developing an ecosystem for multifaceted benchmarking of FMs.

Might efforts like these eventually lead CRFM to become an independent auditor of systems that embed FMs? Will CRFM get directly involved in educating lawmakers and guiding public policy? How far outside of Stanford will it reach in the coming decades? Percy is unsure of the answers to these questions (and conveys a certain longing to return to pure research!), but his near-term plans deliberately mix research, applications, and social responsibility:

“You can think about three pillars,” he told me. “There’s social responsibility, where we’re trying to document, evaluate, and increase the transparency of all these models to build community norms. […] The second is technical advances. We haven’t talked about this as much, but there’s new ways of doing pre-training, there’s different architectures […]. And what CRFM can hope to do is provide the infrastructure so that students and researchers can leverage and make technical advances in those regards. And the third is applications, where we are collaborating with members of other departments and other schools on campus to look at the different data sets and the challenges associated with them.”

The longer-term future of the center is harder to predict given the current incredible pace of change, but Percy does offer a clear vision for much wider participation:

“I hope that CRFM will be a respectable player in terms of shaping the norms around how these technologies are built. […] And I also think that it doesn’t make sense for a mission of that scale to be restricted only to Stanford. It should be something that’s really decentralized and governed in a much more decentralized way.”

Conclusion

The episode is peppered with fun and thought-provoking digressions into current research challenges, productive application areas, the nature of scaling, the complex place that Stanford occupies in the AI landscape, privacy, and other timely topics.

The episode is an excellent companion to an earlier CS224U episode with CRFM student affiliate Rishi Bommasani. Follow this link to listen to the episode, read the transcript, and check out the extensive show notes.

Chris Potts is a professor and chair of the Stanford Department of Linguistics, and a professor, by courtesy, of the Stanford Department of Computer Science.

