Illustration of rocket ship

What if social media algorithms were designed to maximize societal value? How can healthcare AI offer explainable and equitable treatment decisions? Can we build socially responsible foundation models?

The Stanford Institute for Human-Centered Artificial Intelligence has awarded $2.75 million in funding through Hoffman-Yee Research Grants to six Stanford research teams. The teams are working to solve some of the most challenging problems in the field of AI and may be eligible for an additional $2 million each over the next two years.

“The Hoffman-Yee program was designed to fund ‘moonshot’ ideas that could be truly transformational in addressing scientific, technical, or societal challenges in the field of artificial intelligence,” says James Landay, Stanford HAI associate director and professor of computer science who oversees HAI’s grant program. “These scholars are working at the forefront of what is possible. It will be exciting to see what they do.”

The Hoffman-Yee Grant Program, launched in 2020 with support from philanthropists Reid Hoffman and Michelle Yee, is a multiyear initiative that seeks to fund innovative, breakthrough AI research with a multidisciplinary perspective. In its inaugural year, the program funded teams developing AI tutors, exoskeletons to help older adults and people with disabilities walk, and an AI time machine to study historical concepts, among others (read about prior winners here).

This year, HAI received more than 20 proposals from all seven Stanford schools. Proposals went through two rounds of review by Stanford faculty members and also underwent an ethical and societal review.

The awardees:

Improving Healthcare Decisions with AI

Many clinicians rely on risk stratification of patients to help guide care. Traditionally, they assess risk using simple decision rules – such as whether a patient’s blood glucose meets target metrics. AI could transform this process through data-driven risk stratification and personalized intervention strategies. But for it to be a useful tool, it must be explainable, actionable, and equitable. This team of scholars, with expertise in computer science, statistics, pediatrics, and health policy, will create a framework for developing risk scores with these goals in mind. The framework will help clinicians close the loop between their knowledge and AI’s inferences.

Encoding Societal Values in Social Media

Today’s social media AIs are oriented around individualist values – recommending, for example, the video most likely to elicit a like from an individual user. But what maximizes engagement can also amplify antisocial behavior. In this project, experts in computer science, communication, psychology, and law will develop a social media approach that encodes societal values into these AI models, in hopes of creating social media AI that promotes long-term community health, depolarization, and equity of voice. The project seeks to build a foundational understanding of how humans and AI models intertwine to produce the current suite of negative impacts, and to translate those insights into an alternative technical, social, and policy approach.

Building Technically Better, Socially Responsible Foundation Models

Foundation models are shifting the field of AI, with applications in biomedicine, medical imaging, law, and more. But these large general purpose models are also still in their infancy: They are technically immature and poorly understood, and they pose unknown social risk. This team, made up of experts in machine learning, law, political science, biomedicine, and robotics, aims to improve these models’ technical capabilities while also considering social concerns like privacy, homogenization, and intellectual property in an integrated way.

Bridging the Gap Between Vision and Language Models

A human can see a puddle of milk and infer that a roommate knocked over the carton. AI systems haven’t yet achieved human-level inferences. In this project, scholars in the fields of machine learning, law, political science, biomedicine, vision, and robotics will develop MARPLE, a computational framework that combines evidence from vision, audio, and language to produce a causal model of the world and develop human-understandable explanations of what happened. A system like MARPLE could improve home assistants, enable meaningful video analysis, support legal fact-finders in court, and even help us better understand human inference.

Improving Immigrant Integration

Where immigrants settle matters – finding employment, learning the language, and accessing education or healthcare can be more challenging in some locations than others. A poor fit can limit an immigrant's ability to integrate and prolong poverty. In many countries, refugee placement is quasi-random. In this project, scholars from political science, management science and engineering, statistics, and more will test an algorithmic matching tool that can be used by both governments and immigrants to identify the locations that give each individual the best chance of successful integration.

Advancing AI Hardware and Software

Large language models consume significant amounts of energy during training, and that demand is growing as models get larger. At the same time, chip performance is not improving fast enough to keep pace. Scholars from bioengineering, electrical engineering, computer science, and statistics will pursue software and hardware advances modeled after the human brain to create a more sustainable, efficient way to train large models. This approach would lower the cost of training, mitigate unsustainable carbon emissions, and reduce dependence on cloud services. The project will address three fundamental challenges: retrieval-enhanced models, sparse signaling, and 3D nanofabrication.

Learn more about Hoffman-Yee Grants and our prior winners. See more details on this year’s winning teams.

