Four Research Teams Awarded New Hoffman-Yee Grant Funding | Stanford HAI

Announcement

November 13, 2023

This year's research spans foundation models, health care algorithms, social values in social media, and improved chip technology.

Stanford HAI is pleased to announce that four research teams will receive up to $2 million each to fund their groundbreaking, multidisciplinary research in artificial intelligence.

The Hoffman-Yee Grants are designed to fund Stanford teams with research spanning HAI’s key areas of focus: understanding the human impact of AI, augmenting human capabilities, and developing AI technologies inspired by human intelligence. These grants are made possible by a gift from philanthropists Reid Hoffman and Michelle Yee.

Six teams were selected in 2022 to receive an initial grant of $500,000 each. After presenting their research to date at the annual Hoffman-Yee Symposium, four of the teams were selected for additional funding.

The call for proposals for 2024 is now open. Apply for the newest round of Hoffman-Yee Grants.

“These teams represent the interdisciplinarity that is core here at Stanford HAI, and we believe the results of these projects could play a significant role in defining future work in AI from academia to industry, government, healthcare and civil society,” said Vanessa Parli, HAI Director of Research Programs.

To date, Stanford HAI has awarded $17.75 million in Hoffman-Yee Grants to research teams pursuing projects spanning intelligent wearables, curious AI agents, refugee matching systems, and causal understanding tools.

Below, learn about the winning teams, or watch the Hoffman-Yee Symposium for more details on these innovative research projects.

EAE Scores: A Framework for Explainable, Actionable and Equitable Risk Scores for Healthcare Decisions

Many clinicians rely on risk stratification of patients to help guide care. Traditionally, they assess risk using simple decision rules, such as whether a patient’s blood glucose meets target metrics. AI could transform this process through data-driven risk stratification and personalized intervention strategies, but for it to be a useful tool, it must be explainable, actionable, and equitable. Read more about this team’s project on Type 1 diabetes in its Stanford 4T Study (Teamwork, Targets, Technology, and Tight Control) and its TIDE platform.

Foundation Models: Integrating Technical Advances, Social Responsibility, and Applications

Foundation models are shifting the field of AI, with applications in biomedicine, medical imaging, law, and more. But these large, general-purpose models are still in their infancy: they are technically immature and poorly understood, and they pose unknown social risks. This team, made up of experts in machine learning, law, political science, biomedicine, and robotics, aims to improve these models’ technical capabilities while also addressing social concerns like privacy, homogenization, and intellectual property in an integrated way. Read more about this team’s work expanding the capabilities of foundation models, its new datasets in law and robotics, and its research in transparency and evaluation.

Tuning Our Algorithmic Amplifiers: Encoding Societal Values into Social Media Algorithms

Today’s social media AIs are oriented around individualist values, recommending, for example, a video that a given user is likely to engage with. But what maximizes engagement might also amplify antisocial behavior. In this project, experts in computer science, communication, psychology, and law are developing a social media approach that encodes societal values into these AI models, in the hope of creating social media AI that benefits long-term community health, depolarization, or equity of voice. The project seeks to build a foundational understanding of how humans and AI models intertwine to produce the current suite of negative impacts, and to translate those insights into an alternative technical, social, and policy approach. Read a blog post from the team about this project.

Dendritic Computation for Knowledge Systems

Large language models require a significant amount of energy to train, and that demand is growing as the models get larger. Meanwhile, chips aren’t improving fast enough to keep up with these energy needs. Scholars from bioengineering, electrical engineering, computer science, and statistics are pursuing software and hardware advances modeled after the human brain to create a more sustainable, efficient way to train large models. This approach would lower the cost of training models, mitigate unsustainable carbon emissions, and reduce dependency on cloud services. The project addresses three fundamental challenges: retrieval-enhanced models, sparse signaling, and 3D nanofabrication. Read more about some of this team’s recent work.

Interested in applying for a Hoffman-Yee Grant? The 2024 call for proposals is now open; letters of intent are due by Jan. 29, 2024.

Learn more about the Hoffman-Yee Grant program.

