Percy Liang | Stanford HAI

Senior Fellow, Faculty

Percy Liang

Associate Professor of Computer Science, Stanford University | Director, Stanford Center for Research on Foundation Models | Senior Fellow, Stanford HAI

Topics
Foundation Models
Generative AI
Machine Learning
Natural Language Processing

Percy Liang is an Associate Professor of Computer Science at Stanford University (B.S. from MIT, 2004; Ph.D. from UC Berkeley, 2011) and the director of the Center for Research on Foundation Models (CRFM). He is currently focused on making foundation models (in particular, language models) more accessible through open-source and understandable through rigorous benchmarking. In the past, he has worked on many topics centered on machine learning and natural language processing, including robustness, interpretability, human interaction, learning theory, grounding, semantics, and reasoning. He is also a strong proponent of reproducibility through the creation of CodaLab Worksheets. His awards include the Presidential Early Career Award for Scientists and Engineers (2019), IJCAI Computers and Thought Award (2016), an NSF CAREER Award (2016), a Sloan Research Fellowship (2015), a Microsoft Research Faculty Fellowship (2014), and paper awards at ACL, EMNLP, ICML, COLT, ISMIR, CHI, UIST, and RSS.


Latest Related to Percy Liang

policy brief

Simulating Human Behavior with AI Agents

Joon Sung Park, Carolyn Q. Zou, Aaron Shaw, Benjamin Mako Hill, Carrie J. Cai, Meredith Ringel Morris, Robb Willer, Percy Liang, Michael S. Bernstein
Generative AI | Quick Read | May 20

This brief introduces a generative AI agent architecture that can simulate the attitudes of more than 1,000 real people in response to major social science survey questions.

response to request

Response to OSTP’s Request for Information on the Development of an AI Action Plan

Caroline Meinhardt, Daniel Zhang, Rishi Bommasani, Jennifer King, Russell Wald, Percy Liang, Daniel E. Ho
Regulation, Policy, Governance | Mar 17

Stanford scholars respond to a federal RFI on the development of an AI Action Plan, urging policymakers to promote open and scientific innovation, craft evidence-based AI policy, and empower government leaders.

policy brief

Safeguarding Third-Party AI Research

Kevin Klyman, Shayne Longpre, Sayash Kapoor, Rishi Bommasani, Percy Liang, Peter Henderson
Privacy, Safety, Security | Regulation, Policy, Governance | Quick Read | Feb 13

This brief examines the barriers to independent AI evaluation and proposes safe harbors to protect good-faith third-party research.

All Related

Response to U.S. AI Safety Institute’s Request for Comment on Managing Misuse Risk For Dual-Use Foundation Models
Rishi Bommasani, Alexander Wan, Yifan Mai, Percy Liang, Daniel E. Ho
Sep 09, 2024
response to request

Stanford scholars respond to a federal RFC on the U.S. AI Safety Institute’s draft guidelines for managing the misuse risk for dual-use foundation models.

Topics: Regulation, Policy, Governance | Foundation Models | Privacy, Safety, Security
On the Societal Impact of Open Foundation Models
Sayash Kapoor, Rishi Bommasani, Daniel E. Ho, Percy Liang, and Arvind Narayanan
Feb 27, 2024
news

New research adds precision to the debate on openness in AI.

Considerations for Governing Open Foundation Models
Rishi Bommasani, Sayash Kapoor, Kevin Klyman, Shayne Longpre, Ashwin Ramaswami, Daniel Zhang, Marietje Schaake, Daniel E. Ho, Arvind Narayanan, Percy Liang
Quick Read | Dec 13, 2023
issue brief

This brief highlights the benefits of open foundation models and calls for greater focus on their marginal risks.

Topics: Foundation Models
Responses to OMB's Request for Comment on Draft Policy Guidance on Agency Use of AI
Mariano-Florentino Cuéllar, Daniel E. Ho, Jennifer Pahlka, Amy Perez, Gerald Ray, Kit T. Rodolfa, Percy Liang, Timothy O'Reilly, Todd Park, DJ Patil
Quick Read | Nov 30, 2023
response to request

Stanford scholars respond to a federal RFC on the Office of Management and Budget’s (OMB) draft policy guidance on advancing governance, innovation, and risk management for agency use of AI.

Topics: Government, Public Administration
The AI Regulatory Alignment Problem
Neel Guha, Christie M. Lawrence, Lindsey A. Gailmard, Kit T. Rodolfa, Faiz Surani, Rishi Bommasani, Inioluwa Deborah Raji, Mariano-Florentino Cuéllar, Colleen Honigsberg, Percy Liang, Daniel E. Ho
Quick Read | Nov 15, 2023
policy brief

This brief sheds light on the “regulatory misalignment” problem by considering the technical and institutional feasibility of four commonly proposed AI regulatory regimes.

Topics: Regulation, Policy, Governance
Foundation Models and Copyright Questions
Peter Henderson, Xuechen Li, Dan Jurafsky, Tatsunori Hashimoto, Mark A. Lemley, Percy Liang
Quick Read | Nov 02, 2023
policy brief

This brief warns that fair use may not fully shield U.S. foundation models trained on copyrighted data and calls for combined legal and technical safeguards to protect creators.

Topics: Foundation Models | Regulation, Policy, Governance
New Horizons in Generative AI: Science, Creativity, and Society
conference | Oct 24, 2023 | 9:00 AM - 5:00 PM
Topics: Sciences (Social, Health, Biological, Physical)
Whose Opinions Do Language Models Reflect?
Shibani Santurkar, Esin Durmus, Faisal Ladhak, Cinoo Lee, Percy Liang, Tatsunori Hashimoto
Quick Read | Sep 20, 2023
policy brief

This brief introduces a quantitative framework that allows policymakers to evaluate the behavior of language models to assess what kinds of opinions they reflect.

Topics: Generative AI | Ethics, Equity, Inclusion
Responses to NTIA's Request for Comment on AI Accountability Policy
Rishi Bommasani, Sayash Kapoor, Daniel Zhang, Arvind Narayanan, Percy Liang, Jennifer King
Jun 14, 2023
response to request

Stanford scholars respond to a federal RFC on AI accountability policy issued by the National Telecommunications and Information Administration (NTIA).

Topics: Foundation Models | Privacy, Safety, Security | Regulation, Policy, Governance
Generative AI: Perspectives from Stanford HAI
Russ Altman, Erik Brynjolfsson, Michele Elam, Surya Ganguli, Daniel E. Ho, James Landay, Curtis Langlotz, Fei-Fei Li, Percy Liang, Christopher Manning, Peter Norvig, Rob Reich, Vanessa Parli
Deep Dive | Mar 01, 2023
Research

A diversity of perspectives from Stanford leaders in medicine, science, engineering, humanities, and the social sciences on how generative AI might affect their fields and our world.

Topics: Generative AI
Improving Transparency in AI Language Models: A Holistic Evaluation
Rishi Bommasani, Daniel Zhang, Tony Lee, Percy Liang
Quick Read | Feb 28, 2023
issue brief

This brief introduces Holistic Evaluation of Language Models (HELM) as a framework for evaluating language models across commercial AI use cases.

Topics: Machine Learning | Foundation Models
Stanford CRFM Introduces PubMedGPT 2.7B
Michihiro Yasunaga, Tony Lee, Percy Liang, Elliot Bolton, David Hall, Chris Manning
Dec 15, 2022
news

The new 2.7B-parameter language model trained on biomedical literature delivers an improved state of the art for medical question answering.

Topics: Healthcare