
Stanford HAI Awards $2.75M in Hoffman-Yee Grants

Date: August 18, 2022
Topics: Healthcare, Machine Learning

This year’s winners propose innovative, bold ideas pushing the boundaries of artificial intelligence.

What if social media algorithms were designed to maximize societal value? How can healthcare AI offer explainable and equitable treatment decisions? Can we build socially responsible foundation models?

The Stanford Institute for Human-Centered Artificial Intelligence has awarded $2.75 million in funding through Hoffman-Yee Research Grants to six Stanford research teams. The teams are working to solve some of the most challenging problems in the field of AI and may be eligible for an additional $2 million each over the next two years.

“The Hoffman-Yee program was designed to fund ‘moonshot’ ideas that could be truly transformational in addressing scientific, technical, or societal challenges in the field of artificial intelligence,” says James Landay, Stanford HAI associate director and professor of computer science who oversees HAI’s grant program. “These scholars are working at the forefront of what is possible. It will be exciting to see what they do.”

The Hoffman-Yee Grant Program, launched in 2020 with support from philanthropists Reid Hoffman and Michelle Yee, is a multiyear initiative that seeks to fund innovative, breakthrough AI research with a multidisciplinary perspective. In its inaugural year, the program funded teams developing AI tutors, exoskeletons to help older adults and people with disabilities walk, and an AI time machine to study historical concepts, among others (read about prior winners here).

This year, HAI received more than 20 proposals from all seven Stanford schools. Proposals went through two rounds of review by Stanford faculty members and also underwent an ethical and societal review.

The awardees:

Improving Healthcare Decisions with AI

Many clinicians rely on risk stratification of patients to help guide care. Traditionally, they assess risk by relying on simple decision rules – such as whether a patient’s blood glucose meets target metrics. AI could transform this process through data-driven risk stratification and personalized intervention strategies. But for it to be a useful tool, it must be explainable, actionable, and equitable. This team of scholars, with expertise in computer science, statistics, pediatrics, and health policy, will create a framework for developing risk scores with these goals in mind. The framework will help clinicians close the loop between their knowledge and AI’s inferences. 

Encoding Societal Values in Social Media

Today’s social media AIs are oriented around individualist values – recommending a video, for example, that it predicts an individual user will like. But what maximizes engagement can also amplify antisocial behavior. In this project, experts in computer science, communication, psychology, and law will develop a social media approach that encodes societal values into these AI models, in the hope of creating social media AI that supports long-term community health, depolarization, and equity of voice. This project seeks to build a foundational understanding of how humans and AI models intertwine to produce the current suite of negative impacts, and to translate those insights into an alternative technical, social, and policy approach.

Building Technically Better, Socially Responsible Foundation Models

Foundation models are shifting the field of AI, with applications in biomedicine, medical imaging, law, and more. But these large general purpose models are also still in their infancy: They are technically immature and poorly understood, and they pose unknown social risk. This team, made up of experts in machine learning, law, political science, biomedicine, and robotics, aims to improve these models’ technical capabilities while also considering social concerns like privacy, homogenization, and intellectual property in an integrated way.

Bridging the Gap Between Vision and Language Models

A human can see a puddle of milk and infer that a roommate knocked over the carton. AI systems haven’t yet achieved human-level inferences. In this project, scholars in the fields of machine learning, law, political science, biomedicine, vision, and robotics will develop MARPLE, a computational framework that combines evidence from vision, audio, and language to produce a causal model of the world and develop human-understandable explanations of what happened. A system like MARPLE could improve home assistants, enable meaningful video analysis, support legal fact-finders in court, and even help us better understand human inference.

Improving Immigrant Integration

Where immigrants settle matters – finding employment, learning the language, and accessing education or healthcare can be more challenging in some locations than others. A poor fit can limit an immigrant's ability to integrate and prolong poverty. In many countries, refugee placement is quasi-random. In this project, scholars from political science, management science and engineering, statistics, and more will test an algorithmic matching tool that can be used by both governments and immigrants to identify the locations that give each individual the best chance of successful integration.

Advancing AI Hardware and Software

Large language models use a significant amount of energy for training, and that energy use is growing as models get larger. Meanwhile, chip performance isn’t improving fast enough to keep pace with these energy needs. Scholars from bioengineering, electrical engineering, computer science, and statistics will pursue software and hardware advances modeled after the human brain to create a more sustainable, efficient way to train large models. This approach would lower the cost of training models, mitigate unsustainable carbon emissions, and reduce dependency on cloud services. The project will address three fundamental challenges: retrieval-enhanced models, sparse signaling, and 3D nanofabrication.

Learn more about Hoffman-Yee Grants and our prior winners. See more details on this year’s winning teams.

 

Authors
  • Shana Lynch

Related News

Exploring the Dangers of AI in Mental Health Care
Sarah Wells
Jun 11, 2025

A new Stanford study reveals that AI therapy chatbots may not only lack effectiveness compared to human therapists but could also contribute to harmful stigma and dangerous responses.


Digital Twins Offer Insights into Brains Struggling with Math — and Hope for Students
Andrew Myers
Jun 06, 2025

Researchers used artificial intelligence to analyze the brain scans of students solving math problems, offering the first-ever peek into the neuroscience of math disabilities.


Better Benchmarks for Safety-Critical AI Applications
Nikki Goth Itoi
May 27, 2025

Stanford researchers investigate why models often fail in edge-case scenarios.
