As AI use grows, how can we safeguard privacy, security, and data protection for individuals and organizations?
As AI technologies rapidly evolve, Professor Kochenderfer leads the charge in developing effective validation mechanisms to ensure safety in autonomous systems like vehicles and drones.
A key promise of machine learning is the ability to assist users with personal tasks.
This brief examines the barriers to independent AI evaluation and proposes safe harbors to protect good-faith third-party research.
Pointing to "white-hat" hacking, AI policy experts recommend a new system of third-party reporting and tracking of AI’s flaws.
This brief presents a novel assessment framework for evaluating the quality of AI benchmarks and scores 24 benchmarks against the framework.
After 23andMe announced that it is headed to bankruptcy court, it is unclear what will happen to the mass of sensitive genetic data it holds. Jen King, Policy Fellow at HAI, comments on where this data could end up and how it might be used.
In this response to the U.S. AI Safety Institute's (US AISI) request for comment on its draft guidelines for managing the misuse risk for dual-use foundation models, scholars from Stanford HAI, the Center for Research on Foundation Models (CRFM), and the Regulation, Evaluation, and Governance Lab (RegLab) urge the US AISI to strengthen its guidance on reproducible evaluations and third-party evaluations, as well as clarify guidance on post-deployment monitoring. They also encourage the institute to develop similar guidance for other actors in the foundation model supply chain and for non-misuse risks, while ensuring the continued open release of foundation models absent evidence of marginal risk.
HAI Policy Fellow Riana Pfefferkorn explains the different types of risk protection the private messaging app Signal can and cannot offer its users.
In this response to the National Telecommunications and Information Administration's (NTIA) request for comment on dual-use foundation AI models with widely available model weights, scholars from Stanford HAI, the Center for Research on Foundation Models (CRFM), the Regulation, Evaluation, and Governance Lab (RegLab), and other institutions urge policymakers to amplify the benefits of open foundation models while further assessing the extent of their marginal risks.
Stanford HAI joined global leaders to discuss the balance between AI innovation and safety and explore future policy paths.