Four Research Teams Awarded New Hoffman-Yee Grant Funding | Stanford HAI


Four Research Teams Awarded New Hoffman-Yee Grant Funding

November 13, 2023

This year's research spans foundation models, health care algorithms, social values in social media, and improved chip technology.

Stanford HAI is pleased to announce that four research teams will receive up to $2 million each to fund their groundbreaking, multidisciplinary research in artificial intelligence.

The Hoffman-Yee Grants are designed to fund Stanford teams with research spanning HAI’s key areas of focus: understanding the human impact of AI, augmenting human capabilities, and developing AI technologies inspired by human intelligence. These grants are made possible by a gift from philanthropists Reid Hoffman and Michelle Yee.

Six teams were selected to receive an initial grant of $500,000 in 2022. After presenting their research to date at the annual Hoffman-Yee Symposium, four of those teams were selected for additional funding.

The call for proposals for 2024 is now open! Apply for the newest round of Hoffman-Yee Grants.

“These teams represent the interdisciplinarity that is core here at Stanford HAI, and we believe the results of these projects could play a significant role in defining future work in AI from academia to industry, government, healthcare and civil society,” said Vanessa Parli, HAI Director of Research Programs.

To date, Stanford HAI has awarded $17.75 million in Hoffman-Yee Grants to research teams spanning intelligent wearables, curious AI agents, refugee matching systems, and causal understanding tools.

Below, learn about the winning teams, or watch the Hoffman-Yee Symposium for more details on these innovative research projects.

EAE Scores: A Framework for Explainable, Actionable and Equitable Risk Scores for Healthcare Decisions

Many clinicians rely on risk stratification of patients to help guide care. Traditionally, they assess risk by relying on simple decision rules – such as whether a patient’s blood glucose meets target metrics. AI could transform this process through data-driven risk stratification and personalized intervention strategies. But for it to be a useful tool, it must be explainable, actionable, and equitable. Read more about this team’s project on Type 1 diabetes in its Stanford 4T Study (teamwork, technology, targets, tight control) and its TIDE platform.

Foundation Models: Integrating Technical Advances, Social Responsibility, and Applications

Foundation models are shifting the field of AI, with applications in biomedicine, medical imaging, law, and more. But these large, general-purpose models are also still in their infancy: They are technically immature and poorly understood, and they pose unknown social risks. This team, made up of experts in machine learning, law, political science, biomedicine, and robotics, aims to improve these models’ technical capabilities while also considering social concerns like privacy, homogenization, and intellectual property in an integrated way. Read more about this team’s work expanding the capabilities of foundation models, its new datasets in law and robotics, and its research in transparency and evaluation.

Tuning Our Algorithmic Amplifiers: Encoding Societal Values into Social Media Algorithms

Today’s social media AIs are oriented around individualist values – recommending, for example, a video that an individual user is likely to like. But what maximizes engagement might amplify antisocial behavior. In this project, experts in computer science, communication, psychology, and law are developing a social media approach that encodes societal values into these AI models, in hopes of creating social media AI that benefits long-term community health, depolarization, and equity of voice. This project seeks to build a foundational understanding of how humans and AI models intertwine to produce the current suite of negative impacts, and to translate those insights into an alternative technical, social, and policy approach. Read a blog post from the team about this project.

Dendritic Computation for Knowledge Systems

Large language models consume a significant amount of energy during training, and that energy use is growing as the models get larger. Meanwhile, chips aren’t advancing fast enough to keep pace with these energy needs. Scholars from bioengineering, electrical engineering, computer science, and statistics are pursuing software and hardware advances modeled after the human brain to create a more sustainable, efficient way to train large models. This approach would lower the cost of training models, mitigate unsustainable carbon emissions, and reduce dependency on cloud services. The project addresses three fundamental challenges: retrieval-enhanced models, sparse signaling, and 3D nanofabrication. Read more about some of this team’s recent work.

Interested in applying for a Hoffman-Yee Grant? We’re currently calling for proposals for 2024. Letters of intent are due by Jan. 29, 2024.

Learn more about the Hoffman-Yee Grant program.
