How can AI be designed to ensure fairness, transparency, and inclusivity?
After 23andMe announced that it’s headed to bankruptcy court, it’s unclear what happens to the mass of sensitive genetic data it holds. Jen King, Policy Fellow at HAI, comments on where this data could end up and how it could be used.
HAI believes that all researchers and funding agencies have a responsibility to mitigate potential long-term harms from their research. HAI provides select examples, on-the-ground experiences, and perspectives from devising and administering an ethical reflection process for research. While HAI believes that ethical reflection can be integrated into any grantmaking process, the exact process can and should be tailored to the needs of the institution.
AI presents an opportunity to reflect on society’s biases, but we need to pay close attention to both technical and social considerations, says Stanford HAI Faculty Affiliate Sanmi Koyejo.
This brief explores the complexities of accounting for race in clinical algorithms for evaluating kidney disease and the implications for tackling deep-seated health inequities.
Stanford HAI researchers create eight new AI benchmarks that could help developers reduce bias in AI models, potentially making them fairer and less likely to cause harm.
This white paper, produced in collaboration with Black in AI, presents considerations for the Congressional Black Caucus’s policy initiatives by highlighting where AI holds the potential to deepen racial inequalities and where it can benefit Black communities.
New research tests large language models for consistency across diverse topics, revealing that while they handle neutral topics reliably, controversial issues lead to varied answers.
In this brief, Stanford scholars present one of the first empirical investigations into AI ethics on the ground in private technology companies.
The approach paves the way for faster and more accurate compliance with California’s anti-discrimination law.
In this brief, Stanford scholars test a variety of ordinary text prompts to examine how major text-to-image AI models encode a wide range of dangerous biases about demographic groups.
Large language models exhibit alarming magnitudes of bias when generating stories about learners, often reinforcing harmful stereotypes.
Because tech industry ethics teams lack resources and authority, their effectiveness is spotty at best, according to a new study.