2022 Hoffman-Yee Grant Recipients
Six Stanford research teams have received funding to solve some of the most challenging problems in AI.
Tuning Our Algorithmic Amplifiers: Encoding Societal Values into Social Media Algorithms
Artificial intelligence algorithms underpin social media, influencing everything from feed ranking to moderation to disinformation classification. These algorithms maximize each user's individual experience, as predicted from likes, retweets, and other behavioral data, which can come at the expense of societal values such as wellbeing, social capital, mitigation of harm to minoritized groups, democracy, and pro-social norms. How can we encode societal values into these algorithms without sacrificing the core of what makes social media compelling?

Our project will develop intertwined social scientific, engineering, and policy answers to this question. Social scientific research will help us understand the societal values at play, how the algorithms influence those values, and whether they create feedback loops that undercut them. Engineering research will develop new participatory models for collectively deciding how to embed these societal values in social media AI (e.g., feed ranking), methods for measuring the impact of AI decisions on these values from sparse observable data, and techniques for concretely embedding these (potentially conflicting) values into the algorithms. Policy proposals will articulate how the societal values in such algorithms ought to be decided upon, and the kinds of regulation and oversight that social media algorithms ought to have.

Underlying each of these threads is a measurement challenge at scale, so we will recruit a large participant panel that reaches across political, gender, racial, and cultural identities. These consented participants will form a longitudinal panel for interviews, surveys, data collection, and evaluation of the interventions we develop, enabling a scale of measurement and testing that is typically out of reach for research. Through this work, we hope to chart a future in which social media AIs help us achieve our societal goals rather than undermine them.
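To make the engineering thread concrete, here is a minimal, purely illustrative sketch of value-aware feed ranking: each candidate item carries a predicted engagement score plus scores for societal values, and items are re-ranked by a weighted combination in which the value weights are imagined as the output of a participatory process. The `Item` class, `value_aware_rank` function, value names, and all numbers are invented for illustration; they do not describe the project's actual methods.

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    """A candidate feed item with model-predicted scores (all hypothetical)."""
    item_id: str
    engagement: float                                  # predicted like/retweet probability
    value_scores: dict = field(default_factory=dict)   # e.g. {"wellbeing": 0.2, ...}

def value_aware_rank(items, value_weights, engagement_weight=1.0):
    """Re-rank items by predicted engagement plus weighted societal-value scores.

    `value_weights` is imagined as the output of a participatory process
    (a panel's collective decision), not something learned from engagement data.
    """
    def score(item):
        societal = sum(w * item.value_scores.get(v, 0.0)
                       for v, w in value_weights.items())
        return engagement_weight * item.engagement + societal
    return sorted(items, key=score, reverse=True)

# Illustrative usage with made-up numbers: item "a" is more engaging but scores
# poorly on the (hypothetical) societal values, so it drops below item "b".
feed = [
    Item("a", engagement=0.9, value_scores={"wellbeing": -0.4, "civility": -0.2}),
    Item("b", engagement=0.6, value_scores={"wellbeing": 0.3, "civility": 0.5}),
]
weights = {"wellbeing": 1.0, "civility": 0.5}   # hypothetical panel-chosen weights
print([item.item_id for item in value_aware_rank(feed, weights)])   # -> ['b', 'a']
```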
NAME | ROLE | SCHOOL | DEPARTMENTS |
---|---|---|---|
Michael Bernstein | Main PI | Engineering | Computer Science |
Angele Christin | Co-PI | Humanities and Sciences | Communication |
Jeffrey Hancock | Co-PI | Humanities and Sciences | Communication |
Tatsunori Hashimoto | Co-PI | Engineering | Computer Science |
Nathaniel Persily | Co-PI | Law School | Law School |
Jeanne Tsai | Co-PI | Humanities and Sciences | Psychology |
Johan Ugander | Co-PI | Engineering | Management Science and Engineering |
MARPLE: Explaining what happened through multi-modal simulation
Humans have a remarkable ability to figure out what happened. From a puddle of milk, we can infer that our roommate must have forgotten to close the fridge and that the milk toppled over and splashed on the floor. When humans infer what happened, they combine evidence from multiple modalities. For example, jury members are often presented with a wide variety of evidence that may include images of the crime scene, surveillance videos, audio recordings, and various kinds of testimony from different witnesses. The jury member's task is to take in all these sources of information and get at the truth of what happened. Current AI systems fail to match the inferential capacities of humans. While great strides have been made in developing models that understand and produce language, as well as models that process visual input, we believe a key component is missing: we need AI systems that integrate different sources of evidence into a causal model of the world.
In this project, we will take major steps toward bridging the gap between vision and language models in AI. We will develop MARPLE (named after the detective Miss Marple), a computational framework that combines evidence from vision, audio, and language to produce human-understandable explanations of what happened. Current research in cognitive science shows that the human capacity to draw flexible inferences about the physical world, and about each other, is best explained by assuming that people construct mental causal models of a domain and use these models to simulate different counterfactual possibilities. To explain what happened, and to say what caused what, it is critical to be able to go beyond what actually happened and simulate alternative possibilities. An AI system capable of producing explanations from multiple sources of evidence has enormous potential impact: it would improve home assistants, enable meaningful video analysis, support legal fact-finders in court, and help advance our understanding of human inference.
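As a toy illustration of the kind of inference MARPLE targets, the sketch below scores a handful of hand-written "what happened" hypotheses by how well each explains evidence from vision, audio, and language, assuming the per-modality likelihoods have already been produced by upstream models and are conditionally independent. The hypothesis names, priors, and likelihoods are all invented; MARPLE itself would construct such hypotheses through causal models and simulation rather than enumerate them by hand.

```python
import math

# Hypothetical candidate explanations of "what happened", with prior plausibilities.
priors = {
    "fridge_left_open_milk_fell": 0.3,
    "cat_knocked_over_glass": 0.2,
    "roommate_spilled_while_pouring": 0.5,
}

# Hypothetical per-modality likelihoods P(evidence | hypothesis), imagined as the
# outputs of separate vision, audio, and language models.
likelihoods = {
    "fridge_left_open_milk_fell":     {"vision": 0.7, "audio": 0.4, "language": 0.6},
    "cat_knocked_over_glass":         {"vision": 0.2, "audio": 0.8, "language": 0.1},
    "roommate_spilled_while_pouring": {"vision": 0.3, "audio": 0.3, "language": 0.3},
}

def posterior(priors, likelihoods):
    """Combine prior and per-modality likelihoods (assumed conditionally independent)."""
    unnormalized = {}
    for h, prior in priors.items():
        log_p = math.log(prior) + sum(math.log(l) for l in likelihoods[h].values())
        unnormalized[h] = math.exp(log_p)
    z = sum(unnormalized.values())
    return {h: p / z for h, p in unnormalized.items()}

post = posterior(priors, likelihoods)
best = max(post, key=post.get)
print(best, round(post[best], 3))   # -> fridge_left_open_milk_fell 0.751
```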
NAME | ROLE | SCHOOL | DEPARTMENTS |
---|---|---|---|
Tobias Gerstenberg | Main PI | Humanities and Sciences | Psychology |
Chelsea Finn | Co-PI | Engineering | Computer Science |
Noah Goodman | Co-PI | Humanities and Sciences | Psychology |
Thomas Icard | Co-PI | Humanities and Sciences | Philosophy |
Robert MacCoun | Co-PI | Law School | Law School |
Jiajun Wu | Co-PI | Engineering | Computer Science |
Foundation Models: Integrating Technical Advances, Social Responsibility, and Applications
We are entering a new era of artificial intelligence driven by foundation models such as GPT-3, which are trained on broad data (at immense scale, using self-supervision) and can be adapted to a wide range of downstream tasks. These models demonstrate strong capabilities: they learn rich representations of data, generate human-quality text and images, and even exhibit emergent phenomena such as in-context learning. Moreover, they represent a paradigm shift in how AI operates: enormous resources are pooled into large-scale data collection and the training of foundation models (much like infrastructure), which then serve as an indispensable resource for almost any downstream application.
At the same time, foundation models are still in their infancy: they are technically immature and poorly understood. Given the immense commercial incentives to deploy them, they also pose new social risks that must be studied and managed. Finally, they have so far been applied mainly to the popular Internet applications that the companies developing them care about, yet a rich array of other applications across fields such as law and medicine could benefit from foundation models while posing new research challenges.
To address these deficiencies, we propose improving the technical capabilities of foundation models while attending to social responsibility and staying grounded in real-world applications. We have assembled a diverse team with deep multidisciplinary expertise across machine learning, law, political science, biomedicine, vision, and robotics, a team that has already demonstrated its ability to work together by producing the 200+ page report on foundation models. We plan to improve our understanding of training objectives, study the role of data in model behavior, and develop novel model architectures based on structured state space models, diffusion, and retrieval. We will also investigate privacy and intellectual property implications and the effects of homogenization, and develop frameworks for recourse when downstream systems fail. Finally, we will apply foundation models in biomedicine, law, and robotics. Overall, we believe this multi-faceted, integrated approach will be key to improving the foundations of powerful future AI systems.
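As a deliberately simplified illustration of the "train once, adapt broadly" paradigm, the sketch below adapts a frozen "pretrained" encoder to a downstream task by fitting only a small linear head on top of its features. The encoder here is just a fixed random projection standing in for a real foundation model, and the task is synthetic; none of this reflects the specific architectures (state space models, diffusion, retrieval) the project will study.

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_FEAT = 20, 64

# Stand-in for a frozen foundation-model encoder: in reality a large model trained
# with self-supervision on broad data; here just a fixed random nonlinear projection.
W_frozen = rng.standard_normal((D_IN, D_FEAT)) / np.sqrt(D_IN)
def encode(x):
    return np.tanh(x @ W_frozen)

# Synthetic downstream task the "foundation model" was never trained on.
X = rng.standard_normal((500, D_IN))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

# "Adaptation" = fitting only a small linear head (a ridge-regression probe)
# on top of the frozen features; the encoder itself is never updated.
Phi = encode(X)
lam = 1e-2
w_head = np.linalg.solve(Phi.T @ Phi + lam * np.eye(D_FEAT), Phi.T @ y)

preds = (Phi @ w_head > 0.5).astype(float)
print("training accuracy of the probe:", (preds == y).mean())
```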
NAME | ROLE | SCHOOL | DEPARTMENTS |
---|---|---|---|
Percy Liang | Main PI | Engineering | Computer Science |
Russ B. Altman | Co-PI | Engineering | Bioengineering |
Jeannette Bohg | Co-PI | Engineering | Computer Science |
Akshay Chaudhari | Co-PI | Medicine | Radiology |
Chelsea Finn | Co-PI | Engineering | Computer Science |
Tatsunori Hashimoto | Co-PI | Engineering | Computer Science |
Dan E. Ho | Co-PI | Law School | Law School |
Fei-Fei Li | Co-PI | Engineering | Computer Science |
Tengyu Ma | Co-PI | Engineering | Computer Science |
Christopher Manning | Co-PI | Engineering | Computer Science |
Christopher Re | Co-PI | Engineering | Computer Science |
Rob Reich | Co-PI | Humanities and Sciences | Political Science |
Dorsa Sadigh | Co-PI | Engineering | Computer Science |
Matei Zaharia | Co-PI | Engineering | Computer Science |
EAE Scores: A Framework for Explainable, Actionable and Equitable Risk Scores for Healthcare Decisions
Many clinical systems rely on risk stratification of patients to guide care and select interventions. For example, risk scores may be calculated for everything from cardiovascular outcomes to hospital readmission. The three clinical settings we consider are remote monitoring of patients with type 1 diabetes (T1D), opioid overdose risk, and seizure prediction. Traditionally, the cycle of assessing risk and treating patients is informed by clinical training and relies on simple decision rules, such as whether a patient's blood glucose meets target metrics. AI has the potential to transform this process through data-driven risk stratification and personalized intervention strategies. However, as clinical decision support, such methods must be (i) explainable, providing factors that meaningfully contribute to a clinician's reasoning; (ii) actionable, leading to insights that directly inform intervention decisions; and (iii) equitable, ensuring that the scores neither perpetuate patterns of inequality nor induce negative feedback loops.
In the proposed work, we will create EAE Scores, a framework for developing explainable, actionable, and equitable risk scores for healthcare decisions. EAE Scores will both produce new forms of introspection through explainability and enable providers to close the loop between their knowledge of intervention decisions and the AI's inferences. Furthermore, EAE Scores will provide a systematic approach for incorporating equity into every step of the development process. The proposed outcomes of the work are threefold: (i) general AI algorithms and methods with applicability beyond our clinical settings, (ii) robust open-source tools that allow others to create and deploy more explainable, actionable, and equitable risk scores, and (iii) direct improvements in clinical outcomes for T1D, epilepsy, and opioid overdose risk.
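For intuition only, here is a minimal sketch of what an explainable, equity-audited risk score might look like: a logistic model whose per-feature additive contributions are reported alongside each score, together with a crude calibration check across patient subgroups. The features, cohort, and audit are entirely synthetic and are not the EAE Scores framework itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Entirely synthetic cohort: three features (e.g., a glucose trend, age, prior events)
# and a binary subgroup indicator used only for the equity check below.
n = 2000
X = rng.standard_normal((n, 3))
group = rng.integers(0, 2, size=n)
true_w = np.array([1.2, 0.4, 0.8])
y = (rng.random(n) < 1 / (1 + np.exp(-(X @ true_w - 0.5)))).astype(float)

# Fit a plain logistic risk model by gradient descent. It is explainable by
# construction: the score is a weighted sum, so each feature contributes w_i * x_i.
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y) / n)
    b -= 0.5 * (p - y).mean()

def explain(x):
    """Return the risk score and each feature's additive contribution to it."""
    contributions = w * x
    risk = 1 / (1 + np.exp(-(contributions.sum() + b)))
    return risk, contributions

# Crude equity audit: compare mean predicted vs. observed risk within each subgroup.
p = 1 / (1 + np.exp(-(X @ w + b)))
for g in (0, 1):
    mask = group == g
    print(f"group {g}: predicted {p[mask].mean():.3f} vs observed {y[mask].mean():.3f}")
```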
NAME | ROLE | SCHOOL | DEPARTMENTS |
---|---|---|---|
Carlos Ernesto Guestrin | Main PI | Engineering | Computer Science |
Carissa Carter | Co-PI | Engineering | d.school |
Emily Fox | Co-PI | Humanities and Sciences | Statistics, Computer Science (courtesy) |
Ramesh Johari | Co-PI | Engineering | Management Science and Engineering, Electrical Engineering (courtesy), Computer Science (courtesy) |
David Maahs | Co-PI | Medicine | Pediatrics |
Priya Prahalad | Co-PI | Medicine | Pediatrics |
Sherri Rose | Co-PI | Medicine | Health Policy |
David Scheinker | Co-PI | Medicine | Pediatrics |
Matching Newcomers To Places: Leveraging Human-Centered AI to Improve Immigrant Integration
The place where immigrants settle within a host country has a powerful impact on their lives. This destination can be a stepping stone and provide opportunities to find employment, maximize earnings, learn the host country language, and access services such as education and healthcare. Location decisions therefore not only affect immigrants themselves; they also shape immigrants’ contributions to the local economy and society. This project seeks to develop and test data-driven matching tools (called GeoMatch) for location decision-makers—both governments and immigrants themselves—that generate personalized location recommendations, leveraging insights from historical data and human-centered AI. The goal is to advance both the theoretical and empirical frontiers of algorithmic matching for newcomers. On the theoretical front, our interdisciplinary team of faculty experts will tackle problems at the intersection of estimation and prediction, algorithms and mechanism design, human-AI interaction, and immigrant integration. On the empirical front, we plan to conduct pilot tests via randomized controlled trials on the use of GeoMatch in collaboration with partners.
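Below is a minimal sketch of the "predict, then assign" structure behind such matching tools, with invented numbers: predicted integration outcomes (e.g., the probability of finding employment) for each newcomer at each location feed into an assignment problem that respects location capacities. GeoMatch itself involves far richer estimation, mechanism design, and human-AI interaction; this only illustrates the basic idea.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Invented predicted outcomes: rows are newcomers, columns are locations, and each
# entry is a predicted probability of, say, finding employment at that location.
predicted = np.array([
    [0.55, 0.30, 0.20],
    [0.25, 0.60, 0.40],
    [0.50, 0.45, 0.10],
    [0.15, 0.35, 0.65],
])
capacity = [2, 1, 1]   # how many newcomers each location can host

# Expand each location into `capacity` identical slots, then solve a standard
# assignment problem that maximizes the total predicted outcome.
slots = np.repeat(np.arange(len(capacity)), capacity)
cost = -predicted[:, slots]                  # negate because the solver minimizes
rows, cols = linear_sum_assignment(cost)
assignment = {int(person): int(slots[slot]) for person, slot in zip(rows, cols)}
print(assignment)                                      # -> {0: 0, 1: 1, 2: 0, 3: 2}
print("total predicted outcome:", predicted[rows, slots[cols]].sum())   # ~= 2.3
```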
NAME | ROLE | SCHOOL | DEPARTMENTS |
---|---|---|---|
Jens Hainmueller | Main PI | Humanities and Sciences | Political Science |
Avidit Acharya | Co-PI | Humanities and Sciences | Political Science |
Yonatan Gur | Co-PI | Graduate School of Business | Graduate School of Business |
Tomas Jimenez | Co-PI | Humanities and Sciences | Sociology |
Dominik Rothenhaeusler | Co-PI | Humanities and Sciences | Statistics |
Dendritic Computation for Knowledge Systems
Artificial Intelligence (AI) now advances by multiplying twice as many floating-point numbers every two months, but the semiconductor industry tiles twice as many digital multipliers on a chip only every two years. Consequently, users must access advanced AI through the cloud, where datacenters house tens of thousands of chips and consume about 20 megawatts of electricity, enough to power 16,000 homes. We aim to exchange digital multipliers tiled in 2-D for dendrite-like nanodevices integrated in 3-D, moving away from learning with synapses toward learning with dendrites. This dendrocentric reconception of the brain promises datacenter performance on a smartphone's energy budget. That would rein in AI's unsustainable energy, carbon, and monetary costs, distribute its productivity gains equitably, transform its users' experience, and restore their privacy.
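The scaling mismatch in the opening sentence can be made concrete with a little arithmetic; the doubling times and datacenter figures below are taken from the paragraph above, and everything else follows from them.

```python
# Doubling times taken from the paragraph above.
demand_doubling_months = 2     # floating-point operations used by leading AI models
supply_doubling_months = 24    # digital multipliers the industry can tile on a chip

years = 4
demand_growth = 2 ** (12 * years / demand_doubling_months)   # 2**24
supply_growth = 2 ** (12 * years / supply_doubling_months)   # 2**2
print(f"over {years} years: demand x{demand_growth:,.0f}, supply x{supply_growth:.0f}")
print(f"the gap grows by a factor of {demand_growth / supply_growth:,.0f}")

# Energy scale of the cloud datacenters mentioned above.
datacenter_megawatts = 20
homes_powered = 16_000
print(f"{datacenter_megawatts * 1000 / homes_powered:.2f} kW per home, on average")
```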
NAME | ROLE | SCHOOL | DEPARTMENTS |
---|---|---|---|
Kwabena Boahen | Main PI | Engineering | Bioengineering |
Scott W Linderman | Co-PI | Humanities and Sciences | Statistics |
H.-S. Philip Wong | Co-PI | Engineering | Electrical Engineering |
Matei Zaharia | Co-PI | Engineering | Computer Science |