Machine Learning | Stanford HAI

Machine Learning

Learn about the latest advances in machine learning that allow systems to learn and improve over time.

Event
Tom Mitchell | The History of Machine Learning
Feb 23, 2026, 12:00 PM - 1:00 PM

How did we get to today’s technology, which now supports a trillion-dollar AI industry? What were the key scientific breakthroughs? What were the surprises and dead ends along the way...

Media Mention
AI Leaders Discuss How To Foster Responsible Innovation At TIME100 Roundtable In Davos
TIME
Jan 21, 2026

HAI Senior Fellow Yejin Choi discussed responsible AI model training at Davos, asking, “What if there could be an alternative form of intelligence that really learns … morals, human values from the get-go, as opposed to just training LLMs on the entirety of the internet, which actually includes the worst part of humanity, and then we then try to patch things up by doing ‘alignment’?”

Ethics, Equity, Inclusion
Generative AI
Machine Learning
Natural Language Processing

Research
Stories for the Future 2024
Isabelle Levent
Deep Dive
Mar 31, 2025

We invited 11 sci-fi filmmakers and AI researchers to Stanford for Stories for the Future, a day-and-a-half experiment in fostering new narratives about AI. Researchers shared perspectives on AI, and filmmakers reflected on the challenges of writing AI narratives. Together, researcher-writer pairs transformed a research paper into a written scene. The challenge? Each scene had to include an AI manifestation, but could not be about the personhood of AI or AI as a threat. Read the results of this project.

Machine Learning
Generative AI
Arts, Humanities
Communications, Media
Design, Human-Computer Interaction
Sciences (Social, Health, Biological, Physical)

Issue Brief
Improving Transparency in AI Language Models: A Holistic Evaluation
Rishi Bommasani, Daniel Zhang, Tony Lee, Percy Liang
Quick Read
Feb 28, 2023

This brief introduces Holistic Evaluation of Language Models (HELM) as a framework for evaluating commercial AI use cases.

Machine Learning
Foundation Models
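To make the idea of a “holistic” evaluation concrete, here is a minimal generic sketch: score a model on every (scenario, metric) pair rather than reporting a single accuracy number. This is not HELM’s actual API (see the crfm-helm package for that); the scenario and metric names below are hypothetical illustrations.

```python
# Generic sketch of multi-scenario, multi-metric evaluation in the spirit of
# HELM. NOT HELM's real interface; all names here are hypothetical.
from typing import Callable

def exact_match(prediction: str, reference: str) -> float:
    return float(prediction.strip() == reference.strip())

def evaluate_holistically(
    model: Callable[[str], str],
    scenarios: dict[str, list[tuple[str, str]]],
    metrics: dict[str, Callable[[str, str], float]],
) -> dict[str, dict[str, float]]:
    """Score the model on every (scenario, metric) pair, not one number."""
    results = {}
    for scenario, examples in scenarios.items():
        preds = [(model(prompt), ref) for prompt, ref in examples]
        results[scenario] = {
            name: sum(metric(p, r) for p, r in preds) / len(preds)
            for name, metric in metrics.items()
        }
    return results

# Hypothetical usage with a trivial stand-in "model":
scores = evaluate_holistically(
    model=lambda prompt: "42",
    scenarios={"qa_demo": [("What is 6 x 7?", "42")]},
    metrics={"exact_match": exact_match},
)
print(scores)  # {'qa_demo': {'exact_match': 1.0}}
```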

Person
Joshua Salomon

Machine Learning
Sciences (Social, Health, Biological, Physical)
Seminar
Dan Iancu & Antonio Skillicorn | Interpretable Machine Learning and Mixed Datasets for Predicting Child Labor in Ghana’s Cocoa Sector
Mar 18, 2026, 12:00 PM - 1:15 PM

Child labor remains prevalent in Ghana’s cocoa sector and is associated with adverse educational and health outcomes for children.

All Work Published on Machine Learning

Media Mention
Stanford’s Yejin Choi & Axios’ Ina Fried
Axios
Jan 19, 2026

Axios chief technology correspondent Ina Fried speaks to HAI Senior Fellow Yejin Choi at Axios House in Davos during the World Economic Forum.

Energy, Environment
Machine Learning
Generative AI
Ethics, Equity, Inclusion
Research
The Promise and Perils of Artificial Intelligence in Advancing Participatory Science and Health Equity in Public Health
Abby C King, Zakaria N Doueiri, Ankita Kaulberg, Lisa Goldman Rosas
Feb 14, 2025

Current societal trends reflect an increased mistrust in science and a lowered civic engagement that threaten to impair research that is foundational for ensuring public health and advancing health equity. One effective countermeasure to these trends lies in community-facing citizen science applications to increase public participation in scientific research, making this field an important target for artificial intelligence (AI) exploration. We highlight potentially promising citizen science AI applications that extend beyond individual use to the community level, including conversational large language models, text-to-image generative AI tools, descriptive analytics for analyzing integrated macro- and micro-level data, and predictive analytics. The novel adaptations of AI technologies for community-engaged participatory research also bring an array of potential risks. We highlight possible negative externalities and mitigations for some of the potential ethical and societal challenges in this field.

Foundation Models
Generative AI
Machine Learning
Natural Language Processing
Sciences (Social, Health, Biological, Physical)
Healthcare
Policy Brief
Promoting Algorithmic Fairness in Clinical Risk Prediction
Stephen R. Pfohl, Agata Foryciarz, Nigam Shah
Quick Read
Sep 09, 2022

This brief examines the debate on algorithmic fairness in clinical predictive algorithms and recommends paths to safer, more equitable healthcare AI.

Healthcare
Machine Learning
Ethics, Equity, Inclusion
Person
Justin Sonnenburg
Alex and Susie Algard Endowed Professor

Sciences (Social, Health, Biological, Physical)
Machine Learning
Media Mention
Spatial Intelligence Is AI’s Next Frontier
TIME
Dec 11, 2025

"This is AI’s next frontier, and why 2025 was such a pivotal year," writes HAI Co-Director Fei-Fei Li.

Computer Vision
Machine Learning
Generative AI
Research
Policy-Shaped Prediction: Avoiding Distractions in Model-Based Reinforcement Learning
Nicholas Haber, Miles Huston, Isaac Kauvar
Dec 13, 2024

Model-based reinforcement learning (MBRL) is a promising route to sample-efficient policy optimization. However, a known vulnerability of reconstruction-based MBRL consists of scenarios in which detailed aspects of the world are highly predictable but irrelevant to learning a good policy. Such scenarios can lead the model to exhaust its capacity on meaningless content, at the cost of neglecting important environment dynamics. While existing approaches attempt to solve this problem, we highlight its continuing impact on leading MBRL methods, including DreamerV3 and DreamerPro, with a novel environment where background distractions are intricate, predictable, and useless for planning future actions. To address this challenge, we develop a method for focusing the capacity of the world model through the synergy of a pretrained segmentation model, a task-aware reconstruction loss, and adversarial learning. Our method outperforms a variety of other approaches designed to reduce the impact of distractors, and is an advance towards robust model-based reinforcement learning.

Machine Learning
Foundation Models
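The abstract’s “task-aware reconstruction loss” can be illustrated with a minimal sketch, assuming (the paper’s actual formulation may differ) that a pretrained segmentation model supplies a per-pixel mask of task-relevant regions; the function and parameter names below are hypothetical.

```python
# Minimal sketch of a task-aware reconstruction loss (an assumption, not the
# paper's exact formulation): down-weight reconstruction error on pixels a
# pretrained segmentation model marks as task-irrelevant background, so the
# world model spends capacity on task-relevant content.
import torch
import torch.nn.functional as F

def task_aware_recon_loss(
    recon: torch.Tensor,           # (B, C, H, W) world-model reconstruction
    target: torch.Tensor,          # (B, C, H, W) ground-truth observation
    relevance_mask: torch.Tensor,  # (B, 1, H, W) in [0, 1]; 1 = task-relevant
    bg_weight: float = 0.1,        # residual weight on background pixels
) -> torch.Tensor:
    """Weight per-pixel error so predictable clutter contributes little."""
    per_pixel = F.mse_loss(recon, target, reduction="none")   # (B, C, H, W)
    weights = bg_weight + (1.0 - bg_weight) * relevance_mask  # broadcasts over C
    return (weights * per_pixel).mean()
```

In the full method the abstract describes, this kind of loss shaping works alongside adversarial learning; the sketch shows only the loss-weighting idea.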