Scholars outline the best possible uses for these tools and the path to deployment.

Because tech industry ethics teams lack resources and authority, their effectiveness is spotty at best, according to a new study.

The Multi-VALUE framework achieves consistent performance across dozens of English dialects.

Through Project Liberty’s Institute, Stanford will collaborate with Georgetown and Sciences Po to shape the technical, ethical, and governance infrastructure of emerging technologies and the next-generation internet.

Generative AI claims to produce new language and images, but when those ideas are based on copyrighted material, who gets the credit? A new paper from Stanford University looks for answers.

A new collaboration between Stanford HAI and the Mayo Clinic will help two scholars explore the use of AI in neurology and cardiology.

Supervised methods for training medical image models aren’t scalable. A new review highlights the potential of self-supervised learning.

Researchers develop a new tool that measures how well popular large language models align with public opinion, offering a way to evaluate bias in chatbots.