As AI use grows, how can we safeguard privacy, security, and data protection for individuals and organizations?
In an era when information is treated as a form of power and self-knowledge an unqualified good, the value of what remains unknown is often overlooked.
After 23andMe announced that it is headed to bankruptcy court, it is unclear what will happen to the mass of sensitive genetic data it holds. Jen King, Policy Fellow at HAI, comments on where this data could end up and how it might be used.
A key promise of machine learning is the ability to assist users with personal tasks.
This brief examines the barriers to independent AI evaluation and proposes safe harbors to protect good-faith third-party research.
HAI Policy Fellow Riana Pfefferkorn explains the different types of risk protection the private messaging app Signal can and cannot offer its users.
Stanford HAI joined global leaders to discuss the balance between AI innovation and safety and explore future policy paths.
This brief presents a novel assessment framework for evaluating the quality of AI benchmarks and scores 24 benchmarks against the framework.
New research tests large language models for consistency across diverse topics, revealing that while they handle neutral topics reliably, controversial issues lead to varied answers.
This white paper explores the current and future impact of privacy and data protection legislation on AI development and provides recommendations for mitigating privacy harms in an AI era.
At a recent Stanford-MIT-Princeton workshop, experts highlighted the need for legal protections, standardized evaluation practices, and better terminology to support third-party AI evaluations.