Rishi Bommasani
Society Lead, Stanford Center for Research on Foundation Models; Ph.D. Candidate in Computer Science, Stanford University

America is ready again to lead on AI—and it won’t just be American companies shaping the AI landscape if the White House has anything to say about it.
New Stanford tracker analyzes the 150 requirements of the White House Executive Order on AI and offers new insights into government priorities.
Stanford HAI agrees with and supports the U.S. AI Safety Institute's (US AISI) draft guidelines for improving the safety, security, and trustworthiness of dual-use foundation models.
While the AI alignment problem—the notion that machine and human values may not be aligned—has arisen as an impetus for regulation, what is less recognized is that hurried calls to regulate create their own regulatory alignment problem, where proposals may distract, fail, or backfire. In this brief, we shed light on this “regulatory misalignment” problem by considering the technical and institutional feasibility of four commonly proposed AI regulatory regimes. Some proposals may fail to address the problems they set out to solve due to technical or institutional constraints, while others may even worsen those problems or introduce entirely new harms.
Responses to NTIA's Request for Comment on AI Accountability Policy
In this brief, Stanford scholars introduce Holistic Evaluation of Language Models (HELM), a framework for evaluating language models across commercial AI use cases.
This brief highlights the benefits of open foundation models and calls for greater focus on their marginal risks.
Perspectives about the benefits and risks of release vary widely. We propose setting up a review board to develop community norms and encourage coordination on release for research access.
As companies release new, more capable models, questions around deployment and transparency arise.
New research adds precision to the debate on openness in AI.
Scholars benchmark 30 prominent language models across a wide range of scenarios and a broad set of metrics to elucidate their capabilities and risks.
Researchers develop a framework to capture the vast downstream impact and complex upstream dependencies that define the foundation model ecosystem.
This brief examines the barriers to independent AI evaluation and proposes safe harbors to protect good-faith third-party research.
Rishi Bommasani, Society Lead at HAI's CRFM, discusses where AI is proving most dangerous, why openness is important, and how regulators are thinking about the open-closed divide.