Rishi Bommasani
Society Lead, Stanford Center for Research on Foundation Models; Ph.D. Candidate in Computer Science, Stanford University

Stanford HAI agrees with and supports the U.S. AI Safety Institute’s (US AISI) draft guidelines for improving the safety, security, and trustworthiness of dual-use foundation models.
New research adds precision to the debate on openness in AI.
This brief highlights the benefits of open foundation models and calls for greater focus on their marginal risks.
New Stanford tracker analyzes the 150 requirements of the White House Executive Order on AI and offers new insights into government priorities.
This brief, produced in collaboration with Stanford RegLab, sheds light on the “regulatory misalignment” problem by considering the technical and institutional feasibility of four commonly proposed AI regulatory regimes.
America is ready again to lead on AI—and it won’t just be American companies shaping the AI landscape if the White House has anything to say about it.
Responses to NTIA's Request for Comment on AI Accountability Policy
Researchers develop a framework to capture the vast downstream impact and complex upstream dependencies that define the foundation model ecosystem.
As companies release new, more capable models, questions around deployment and transparency arise.
In this brief, Stanford scholars introduce Holistic Evaluation of Language Models (HELM) as a framework for evaluating commercial AI use cases.
Scholars benchmark 30 prominent language models across a wide range of scenarios and metrics to elucidate their capabilities and risks.
Perspectives about the benefits and risks of release vary widely. We propose setting up a review board to develop community norms and encourage coordination on release for research access.
In this response to a request for information issued by the National Science Foundation’s Networking and Information Technology Research and Development National Coordination Office (on behalf of the Office of Science and Technology Policy), scholars from Stanford HAI, CRFM, and RegLab urge policymakers to prioritize four areas of policy action in their AI Action Plan:
1) Promote open innovation as a strategic advantage for U.S. competitiveness.
2) Maintain U.S. AI leadership by promoting scientific innovation.
3) Craft evidence-based AI policy that protects Americans without stifling innovation.
4) Empower government leaders with resources and technical expertise to ensure a “whole-of-government” approach to AI governance.
This brief examines the barriers to independent AI evaluation and proposes safe harbors to protect good-faith third-party research.
Rishi Bommasani, Society Lead at HAI's CRFM, discusses where AI is proving most dangerous, why openness is important, and how regulators are thinking about the open-closed divide.