Rishi Bommasani | Stanford HAI

Policy Fellow

Rishi Bommasani

Senior Research Scholar, Stanford HAI

External Bio
Latest Work
Response to OSTP's Request for Information on Accelerating the American Scientific Enterprise
Rishi Bommasani, John Etchemendy, Surya Ganguli, Daniel E. Ho, Guido Imbens, James Landay, Fei-Fei Li, Russell Wald
Quick Read | Dec 26
response to request

Stanford scholars respond to a federal RFI on scientific discovery, calling for the government to support a new “team science” academic research model for AI-enabled discovery.

Transparency in AI is on the Decline
Rishi Bommasani, Kevin Klyman, Alexander Wan, Percy Liang
Dec 09
news

A new study shows the AI industry is withholding key information.

Biden-era AI safety promises aren't holding up, and Apple's the weakest link
Fast Company
Aug 25
media mention

Rishi Bommasani, Society Lead at Stanford Center for Research on Foundation Models, speaks about a new analysis: "Do AI Companies Make Good on Voluntary Commitments to the White House?"


All Related

Response to OSTP’s Request for Information on the Development of an AI Action Plan
Caroline Meinhardt, Daniel Zhang, Rishi Bommasani, Jennifer King, Russell Wald, Percy Liang, Daniel E. Ho
Mar 17, 2025
response to request

Stanford scholars respond to a federal RFI on the development of an AI Action Plan, urging policymakers to promote open and scientific innovation, craft evidence-based AI policy, and empower government leaders.

Regulation, Policy, Governance
Safeguarding Third-Party AI Research
Kevin Klyman, Shayne Longpre, Sayash Kapoor, Rishi Bommasani, Percy Liang, Peter Henderson
Quick Read | Feb 13, 2025
policy brief

This brief examines the barriers to independent AI evaluation and proposes safe harbors to protect good-faith third-party research.

Privacy, Safety, Security
Regulation, Policy, Governance
Are Open-Source AI Models Worth The Risk?
Tech Brew
Oct 31, 2024
media mention

Rishi Bommasani, Society Lead at HAI's CRFM, discusses where AI is proving most dangerous, why openness is important, and how regulators are thinking about the open-versus-closed divide.

Foundation Models
Response to U.S. AI Safety Institute’s Request for Comment on Managing Misuse Risk For Dual-Use Foundation Models
Rishi Bommasani, Alexander Wan, Yifan Mai, Percy Liang, Daniel E. Ho
Sep 09, 2024
response to request

Stanford scholars respond to a federal RFC on the U.S. AI Safety Institute’s draft guidelines for managing the misuse risk for dual-use foundation models.

Regulation, Policy, Governance
Foundation Models
Privacy, Safety, Security
On the Societal Impact of Open Foundation Models
Sayash Kapoor, Rishi Bommasani, Daniel E. Ho, Percy Liang, and Arvind Narayanan
Feb 27, 2024
news

New research adds precision to the debate on openness in AI.

Considerations for Governing Open Foundation Models
Rishi Bommasani, Sayash Kapoor, Kevin Klyman, Shayne Longpre, Ashwin Ramaswami, Daniel Zhang, Marietje Schaake, Daniel E. Ho, Arvind Narayanan, Percy Liang
Quick Read | Dec 13, 2023
issue brief

This brief highlights the benefits of open foundation models and calls for greater focus on their marginal risks.

Foundation Models
By the Numbers: Tracking The AI Executive Order
Caroline Meinhardt, Christie M. Lawrence, Lindsey A. Gailmard, Daniel Zhang, Rishi Bommasani, Rohini Kosoglu, Peter Henderson, Russell Wald, Daniel E. Ho
Nov 16, 2023
news

New Stanford tracker analyzes the 150 requirements of the White House Executive Order on AI and offers new insights into government priorities.

The AI Regulatory Alignment Problem
Neel Guha, Christie M. Lawrence, Lindsey A. Gailmard, Kit T. Rodolfa, Faiz Surani, Rishi Bommasani, Inioluwa Deborah Raji, Mariano-Florentino Cuéllar, Colleen Honigsberg, Percy Liang, Daniel E. Ho
Quick Read | Nov 15, 2023
policy brief

This brief sheds light on the “regulatory misalignment” problem by considering the technical and institutional feasibility of four commonly proposed AI regulatory regimes.

Regulation, Policy, Governance
Decoding the White House AI Executive Order’s Achievements
Rishi Bommasani, Christie M. Lawrence, Lindsey A. Gailmard, Caroline Meinhardt, Daniel Zhang, Peter Henderson, Russell Wald, Daniel E. Ho
Nov 02, 2023
news

America is ready again to lead on AI—and it won’t just be American companies shaping the AI landscape if the White House has anything to say about it.

Responses to NTIA's Request for Comment on AI Accountability Policy
Rishi Bommasani, Sayash Kapoor, Daniel Zhang, Arvind Narayanan, Percy Liang, Jennifer King
Jun 14, 2023
response to request

Stanford scholars respond to a federal RFC on AI accountability policy issued by the National Telecommunications and Information Administration (NTIA).

Foundation Models
Privacy, Safety, Security
Regulation, Policy, Governance
Ecosystem Graphs: The Social Footprint of Foundation Models
Rishi Bommasani
Mar 29, 2023
news

Researchers develop a framework to capture the vast downstream impact and complex upstream dependencies that define the foundation model ecosystem.

Machine Learning
AI Spring? Four Takeaways from Major Releases in Foundation Models
Rishi Bommasani
Mar 17, 2023
news

As companies release new, more capable models, questions around deployment and transparency arise.

Natural Language Processing
Machine Learning