Machine Learning | Stanford HAI

Machine Learning

Learn about the latest advances in machine learning that allow systems to learn and improve over time.

Dan Iancu, Sarah Billington, Antonio Skillicorn | Interpretable Machine Learning and Mixed Datasets for Predicting Child Labor in Ghana’s Cocoa Sector
Seminar | Mar 18, 2026, 12:00 PM - 1:15 PM

Child labor remains prevalent in Ghana’s cocoa sector and is associated with adverse educational and health outcomes for children.


Spatial Intelligence Is AI’s Next Frontier
TIME
Dec 11, 2025
Media Mention

"This is AI’s next frontier, and why 2025 was such a pivotal year," writes HAI Co-Director Fei-Fei Li.

Computer Vision
Machine Learning
Generative AI

Stories for the Future 2024
Isabelle Levent
Deep Dive | Mar 31, 2025
Research

We invited 11 sci-fi filmmakers and AI researchers to Stanford for Stories for the Future, a day-and-a-half experiment in fostering new narratives about AI. Researchers shared perspectives on AI and filmmakers reflected on the challenges of writing AI narratives. Together researcher-writer pairs transformed a research paper into a written scene. The challenge? Each scene had to include an AI manifestation, but could not be about the personhood of AI or AI as a threat. Read the results of this project.

Machine Learning
Generative AI
Arts, Humanities
Communications, Media
Design, Human-Computer Interaction
Sciences (Social, Health, Biological, Physical)

Improving Transparency in AI Language Models: A Holistic Evaluation
Rishi Bommasani, Daniel Zhang, Tony Lee, Percy Liang
Quick Read | Feb 28, 2023
Issue Brief

This brief introduces Holistic Evaluation of Language Models (HELM), a framework for evaluating language models across commercial AI use cases.

Machine Learning
Foundation Models

Joshua Salomon
Person
Machine Learning
Sciences (Social, Health, Biological, Physical)
The Architects of AI Are TIME’s 2025 Person of the Year
TIME
Dec 11, 2025
Media Mention

HAI founding co-director Fei-Fei Li has been named one of TIME's 2025 Persons of the Year. From ImageNet to her advocacy for human-centered AI, Dr. Li has been a guiding light in the field.

Machine Learning
Computer Vision

All Work Published on Machine Learning

Fei-Fei Li Wins Queen Elizabeth Prize for Engineering
Shana Lynch
Nov 07, 2025
News

The Stanford HAI co-founder is recognized for breakthroughs that propelled computer vision and deep learning, and for championing human-centered AI and industry innovation.


Computer Vision
Machine Learning
News
The Promise and Perils of Artificial Intelligence in Advancing Participatory Science and Health Equity in Public Health
Abby C King, Zakaria N Doueiri, Ankita Kaulberg, Lisa Goldman Rosas
Feb 14, 2025
Research

Current societal trends reflect an increased mistrust in science and a lowered civic engagement that threaten to impair research that is foundational for ensuring public health and advancing health equity. One effective countermeasure to these trends lies in community-facing citizen science applications to increase public participation in scientific research, making this field an important target for artificial intelligence (AI) exploration. We highlight potentially promising citizen science AI applications that extend beyond individual use to the community level, including conversational large language models, text-to-image generative AI tools, descriptive analytics for analyzing integrated macro- and micro-level data, and predictive analytics. The novel adaptations of AI technologies for community-engaged participatory research also bring an array of potential risks. We highlight possible negative externalities and mitigations for some of the potential ethical and societal challenges in this field.


Foundation Models
Generative AI
Machine Learning
Natural Language Processing
Sciences (Social, Health, Biological, Physical)
Healthcare
Promoting Algorithmic Fairness in Clinical Risk Prediction
Stephen R. Pfohl, Agata Foryciarz, Nigam Shah
Quick Read | Sep 09, 2022
Policy Brief

This brief examines the debate on algorithmic fairness in clinical predictive algorithms and recommends paths to safer, more equitable healthcare AI.


Healthcare
Machine Learning
Ethics, Equity, Inclusion
Policy Brief
Justin Sonnenburg
Alex and Susie Algard Endowed Professor
Person

Sciences (Social, Health, Biological, Physical)
Machine Learning
Offline “Studying” Shrinks the Cost of Contextually Aware AI
Andrew Myers
Sep 29, 2025
News

By having AI study a user’s context offline, researchers dramatically reduce the memory and cost required to make AI contextually aware.


Foundation Models
Machine Learning
Policy-Shaped Prediction: Avoiding Distractions in Model-Based Reinforcement Learning
Nicholas Haber, Miles Huston, Isaac Kauvar
Dec 13, 2024
Research

Model-based reinforcement learning (MBRL) is a promising route to sample-efficient policy optimization. However, a known vulnerability of reconstruction-based MBRL consists of scenarios in which detailed aspects of the world are highly predictable, but irrelevant to learning a good policy. Such scenarios can lead the model to exhaust its capacity on meaningless content, at the cost of neglecting important environment dynamics. While existing approaches attempt to solve this problem, we highlight its continuing impact on leading MBRL methods, including DreamerV3 and DreamerPro, with a novel environment where background distractions are intricate, predictable, and useless for planning future actions. To address this challenge we develop a method for focusing the capacity of the world model through synergy of a pretrained segmentation model, a task-aware reconstruction loss, and adversarial learning. Our method outperforms a variety of other approaches designed to reduce the impact of distractors, and is an advance towards robust model-based reinforcement learning.
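The task-aware reconstruction loss mentioned in the abstract can be illustrated with a minimal sketch: weight per-pixel reconstruction error by a task-relevance mask (such as one derived from a pretrained segmentation model) so that predictable distractor regions contribute little. All names, the mask source, and the weighting scheme here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def task_aware_reconstruction_loss(pred, target, task_mask, distractor_weight=0.1):
    """Weight per-pixel squared error so task-relevant regions (task_mask == True)
    dominate the loss, down-weighting predictable but irrelevant distractors.
    Names are hypothetical, for illustration only."""
    err = (pred - target) ** 2                        # per-pixel squared error
    weights = np.where(task_mask, 1.0, distractor_weight)
    return float((weights * err).mean())

# Toy example: a 4x4 "image" whose task-relevant region is the top-left 2x2 block.
target = np.zeros((4, 4))
pred = np.ones((4, 4))            # uniform error of 1 at every pixel
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
loss = task_aware_reconstruction_loss(pred, target, mask)
# 4 pixels at weight 1.0 + 12 pixels at weight 0.1 -> (4*1.0 + 12*0.1)/16 = 0.325
```

With a uniform error, the distractor pixels (12 of 16) contribute only a tenth as much as the task pixels, so the model is pushed to spend capacity on the task-relevant region.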


Machine Learning
Foundation Models