HAI Senior Fellow Yejin Choi discussed responsible AI model training at Davos, asking, “What if there could be an alternative form of intelligence that really learns … morals, human values from the get-go, as opposed to just training LLMs on the entirety of the internet, which actually includes the worst part of humanity, and then we then try to patch things up by doing ‘alignment’?”

Renowned leaders in AI, medicine, and ethics join interdisciplinary committee guiding the world’s leading resource on AI trends.

Axios chief technology correspondent Ina Fried speaks to HAI Senior Fellow Yejin Choi at Axios House in Davos during the World Economic Forum.
Elon Musk was forced to put restrictions on X and its AI chatbot, Grok, after its image generator sparked outrage around the world. Grok created non-consensual sexualized images, prompting some countries to ban the bot. Liz Landers discussed Grok's troubles with Riana Pfefferkorn of the Stanford Institute for Human-Centered Artificial Intelligence.

Scholars develop a framework in collaboration with luxury goods multinational LVMH that lays out how large companies can flexibly deploy principles on the responsible use of AI across business units worldwide.

Riana Pfefferkorn, Policy Fellow at HAI, urges immediate Congressional hearings to scope a legal safe harbor for AI-generated child sexual abuse materials following a recent scandal with Grok's newest generative image features.
HAI Policy Fellow Riana Pfefferkorn discusses the policy implications of the “mass digital undressing spree,” in which the chatbot Grok responded to user prompts by removing the clothing from images of women and posing them in bikinis, creating “sexualized images of children,” and posting the results on X.

These models generate plausible timelines from historical patterns; without calibration and auditing, their “probabilities” may not reflect reality.

HAI Co-Director James Landay and HAI Senior Fellow Erik Brynjolfsson discuss the impacts of AI in 2025 and the future of AI in 2026.
As AI-hallucinated case citations flood the courts, judges have increased fines for attorneys who have cited fake cases. HAI Policy Fellow Riana Pfefferkorn hopes this will "make the firm sit up and pay better attention."