How can AI be designed to ensure fairness, transparency, and inclusivity?
Stanford researchers highlight the ongoing challenges of language discrimination in academic publishing, revealing that AI tools may not be the solution for non-native speakers.
In this response to the National Science Foundation’s (NSF) request for information related to research ethics, a group of scholars affiliated with Stanford’s Ethics and Society Review (ESR) and Stanford HAI share lessons drawn from their five years of experience operating the ESR ethical reflection process as a requirement for HAI research grants. They make the case for promoting ethical and societal reflection within NSF’s grantmaking and highlight the common ethical issues that arise as part of AI research reviews.
Pointing to "white-hat" hacking, AI policy experts recommend a new system of third-party reporting and tracking of AI’s flaws.
This policy brief, developed in collaboration with Stanford Health Policy, explores the complexities of accounting for race in clinical algorithms for evaluating kidney disease and the implications for tackling deep-seated health inequities.
The 2025 AI Index highlights key developments over the past year, including major gains in model performance, record levels of private investment, new regulatory action, and growing real-world adoption.
This white paper, produced in collaboration with Black in AI, presents considerations for the Congressional Black Caucus’s policy initiatives by highlighting where AI holds the potential to deepen racial inequalities and where it can benefit Black communities.
"The AI Index equips policymakers, researchers, and the public with the data they need to make informed decisions — and to ensure AI is developed with human-centered values at its core," says Russell Wald, Executive Director of Stanford HAI and Steering Committee member of the AI Index.
In this brief, Stanford scholars present one of the first empirical investigations into AI ethics on the ground in private technology companies.
After 23andMe announced that it’s headed to bankruptcy court, it’s unclear what will happen to the mass of sensitive genetic data it holds. Jen King, Policy Fellow at HAI, comments on where this data could end up and how it might be used.
In this brief, Stanford scholars test a variety of ordinary text prompts to examine how major text-to-image AI models encode a wide range of dangerous biases about demographic groups.
AI presents an opportunity to reflect on society’s biases, but we need to pay close attention to both technical and social considerations, says Stanford HAI Faculty Affiliate Sanmi Koyejo.
Stanford HAI researchers create eight new AI benchmarks that could help developers reduce bias in AI models, potentially making them fairer and less likely to cause harm.