
Four Research Teams Awarded New Hoffman-Yee Grant Funding

This year's research spans foundation models, health care algorithms, social values in social media, and improved chip technology.

Stanford HAI is pleased to announce that four research teams will receive up to $2 million each to fund their groundbreaking, multidisciplinary research in artificial intelligence.

The Hoffman-Yee Grants are designed to fund Stanford teams with research spanning HAI’s key areas of focus: understanding the human impact of AI, augmenting human capabilities, and developing AI technologies inspired by human intelligence. These grants are made possible by a gift from philanthropists Reid Hoffman and Michelle Yee.

Six teams were selected to receive the initial grant of $500,000 in 2022. After presenting their research to date at the annual Hoffman-Yee Symposium, four of the teams were selected for additional funding.



“These teams represent the interdisciplinarity that is core here at Stanford HAI, and we believe the results of these projects could play a significant role in defining future work in AI from academia to industry, government, healthcare and civil society,” said Vanessa Parli, HAI Director of Research Programs.

To date, Stanford HAI has awarded $17.75 million in Hoffman-Yee Grants to research teams spanning intelligent wearables, curious AI agents, refugee matching systems, and causal understanding tools.

Below, learn about the winning teams, or watch the Hoffman-Yee Symposium for more details on these innovative research projects.

EAE Scores: A Framework for Explainable, Actionable and Equitable Risk Scores for Healthcare Decisions

Many clinicians rely on risk stratification of patients to help guide care. Traditionally, they assess risk using simple decision rules – such as whether a patient’s blood glucose meets target metrics. AI could transform this process through data-driven risk stratification and personalized intervention strategies. But for it to be a useful tool, it must be explainable, actionable, and equitable. Read more about this team’s project on Type 1 diabetes in its Stanford 4T Study (Teamwork, Targets, Technology, and Tight Control) and its TIDE platform.

Foundation Models: Integrating Technical Advances, Social Responsibility, and Applications

Foundation models are shifting the field of AI, with applications in biomedicine, medical imaging, law, and more. But these large, general-purpose models are also still in their infancy: They are technically immature and poorly understood, and they pose unknown social risks. This team, made up of experts in machine learning, law, political science, biomedicine, and robotics, aims to improve these models’ technical capabilities while also considering social concerns like privacy, homogenization, and intellectual property in an integrated way. Read more about this team’s work expanding the capabilities of foundation models, its new datasets in law and robotics, and its research in transparency and evaluation.

Tuning Our Algorithmic Amplifiers: Encoding Societal Values into Social Media Algorithms

Today’s social media AIs are oriented around individualist values – recommending a video, for example, because that individual user is likely to like it. But what maximizes engagement can amplify antisocial behavior. In this project, experts in computer science, communication, psychology, and law are developing a social media approach that encodes societal values into these AI models, in hopes of creating social media AI that promotes long-term community health, depolarization, and equity of voice. This project seeks to build a foundational understanding of how humans and AI models intertwine to produce the current suite of negative impacts, and to translate those insights into an alternative technical, social, and policy approach. Read a blog post from the team about this project.

Dendritic Computation for Knowledge Systems

Large language models use a significant amount of energy for training, and that energy use grows as the models get larger. At the same time, chip technology isn’t advancing fast enough to meet these energy demands. Scholars from bioengineering, electrical engineering, computer science, and statistics are pursuing software and hardware advances modeled after the human brain to create a more sustainable, efficient way to train large models. This approach would lower the cost of training models, mitigate unsustainable carbon emissions, and reduce dependency on cloud services. The project addresses three fundamental challenges: retrieval-enhanced models, sparse signaling, and 3D nanofabrication. Read more about some of this team’s recent work.

Interested in applying for a Hoffman-Yee Grant? We’re currently calling for proposals for 2024. Letters of intent are due by Jan. 29, 2024.

Learn more about the Hoffman-Yee Grant program.