In the real world, things change fast. Stanford researchers invented the “curious replay” training method based on studying mice to help AI agents successfully explore and adapt to changing surroundings.

At a recent meeting with the president, HAI leaders urged U.S. investment and leadership to unlock AI's potential.

A Stanford team has developed Sophia, a new way to optimize the pretraining of large language models that’s twice as fast as current approaches.

Scholars outline the best possible uses for these tools and the path to deployment.

Because tech industry ethics teams lack resources and authority, their effectiveness is spotty at best, according to a new study.

The Multi-VALUE framework achieves consistent performance across dozens of English dialects.

Through Project Liberty’s Institute, Stanford will collaborate with Georgetown and Sciences Po to shape the technical, ethical, and governance infrastructure of emerging technologies and the next-generation internet.
Generative AI claims to produce new language and images, but when those ideas are based on copyrighted material, who gets the credit? A new paper from Stanford University looks for answers.

A new collaboration between Stanford HAI and the Mayo Clinic will help two scholars explore the use of AI in neurology and cardiology.

Supervised methods for training medical image models aren’t scalable. A new review highlights the potential of self-supervised learning.

To evaluate bias in chatbots, researchers develop a new tool that measures how well popular large language models align with public opinion.

Don’t put faith in detectors that are “unreliable and easily gamed,” says scholar.

According to a new study, 50% of generative search engine responses lack supportive citations, and 25% of the citations provided are off point.

These powerful machine learning algorithms sit at the core of many generative AI tools today.

In her course called Human-Centered NLP, Yang challenges students to think beyond technical performance or accuracy.

According to Stanford researchers, large language models are not greater than the sum of their parts.
