Seed Research Grants | Stanford HAI


Seed Research Grants

Status: Closed
Date: Applications closed on September 15, 2025
Overview
2024 Recipients
2023 Recipients
2022 Recipients
2021 Recipients
2020 Recipients
2019 Recipients
2018 Recipients
Related
  • Stanford HAI Funds Groundbreaking AI Research Projects
    Nikki Goth Itoi
    Quick Read · Jan 30
    news

    Thirty-two interdisciplinary teams will receive $2.37 million in Seed Research Grants to work toward initial results on ambitious proposals.

  • Policy-Shaped Prediction: Avoiding Distractions in Model-Based Reinforcement Learning
    Nicholas Haber, Miles Huston, Isaac Kauvar
    Dec 13
    Research

    Model-based reinforcement learning (MBRL) is a promising route to sample-efficient policy optimization. However, a known vulnerability of reconstruction-based MBRL arises in scenarios where detailed aspects of the world are highly predictable but irrelevant to learning a good policy. Such scenarios can lead the model to exhaust its capacity on meaningless content at the cost of neglecting important environment dynamics. While existing approaches attempt to solve this problem, we highlight its continuing impact on leading MBRL methods, including DreamerV3 and DreamerPro, with a novel environment where background distractions are intricate, predictable, and useless for planning future actions. To address this challenge, we develop a method for focusing the capacity of the world model through the synergy of a pretrained segmentation model, a task-aware reconstruction loss, and adversarial learning. Our method outperforms a variety of other approaches designed to reduce the impact of distractors and is an advance toward robust model-based reinforcement learning.

  • LABOR-LLM: Language-Based Occupational Representations with Large Language Models
    Susan Athey, Herman Brunborg, Tianyu Du, Ayush Kanodia, Keyon Vafa
    Dec 11
    Research

    Vafa et al. (2024) introduced a transformer-based econometric model, CAREER, that predicts a worker’s next job as a function of career history (an “occupation model”). CAREER was initially estimated (“pre-trained”) using a large, unrepresentative resume dataset, which served as a “foundation model,” and parameter estimation was continued (“fine-tuned”) using data from a representative survey. CAREER had better predictive performance than benchmarks. This paper considers an alternative where the resume-based foundation model is replaced by a large language model (LLM). We convert tabular data from the survey into text files that resemble resumes and fine-tune the LLMs on these text files with the objective of predicting the next token (word). The resulting fine-tuned LLM is used as an input to an occupation model. Its predictive performance surpasses all prior models. We demonstrate the value of fine-tuning and further show that, by adding more career data from a different population, fine-tuning smaller LLMs surpasses the performance of fine-tuning larger models.

  • How Persuasive Is AI-generated Propaganda?
    Josh A. Goldstein, Jason Chao, Shelby Grossman, Alex Stamos, Michael Tomz
    Feb 20
    Research

    Can large language models, a form of artificial intelligence (AI), generate persuasive propaganda? We conducted a preregistered survey experiment of US respondents to investigate the persuasiveness of news articles written by foreign propagandists compared to content generated by GPT-3 davinci (a large language model). We found that GPT-3 can create highly persuasive text as measured by participants’ agreement with propaganda theses. We further investigated whether a person fluent in English could improve propaganda persuasiveness. Editing the prompt fed to GPT-3 and/or curating GPT-3’s output made GPT-3 even more persuasive, and, under certain conditions, as persuasive as the original propaganda. Our findings suggest that propagandists could use AI to create convincing content with limited effort.

  • Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising
    Michelle Lam, Ayush Pandit, Colin H. Kalicki, Rachit Gupta, Poonam Sahoo, Danaë Metaxa
    Oct 04
    Research

    Algorithm audits are powerful tools for studying black-box systems without direct knowledge of their inner workings. While very effective in examining technical components, the method stops short of a sociotechnical frame, which would also consider users themselves as an integral and dynamic part of the system. Addressing this limitation, we propose the concept of sociotechnical auditing: auditing methods that evaluate algorithmic systems at the sociotechnical level, focusing on the interplay between algorithms and users as each impacts the other. Just as algorithm audits probe an algorithm with varied inputs and observe outputs, a sociotechnical audit (STA) additionally probes users, exposing them to different algorithmic behavior and measuring their resulting attitudes and behaviors. As an example of this method, we develop Intervenr, a platform for conducting browser-based, longitudinal sociotechnical audits with consenting, compensated participants. Intervenr investigates the algorithmic content users encounter online, and also coordinates systematic client-side interventions to understand how users change in response. As a case study, we deploy Intervenr in a two-week sociotechnical audit of online advertising (N = 244) to investigate the central premise that personalized ad targeting is more effective on users. In the first week, we observe and collect all browser ads delivered to users, and in the second, we deploy an ablation-style intervention that disrupts normal targeting by randomly pairing participants and swapping all their ads. We collect user-oriented metrics (self-reported ad interest and feeling of representation) and advertiser-oriented metrics (ad views, clicks, and recognition) throughout, along with a total of over 500,000 ads. 
Our STA finds that targeted ads indeed perform better with users, but also that users begin to acclimate to different ads in only a week, casting doubt on the primacy of personalized ad targeting given the impact of repeated exposure. In comparison with other evaluation methods that only study technical components, or only experiment on users, sociotechnical audits evaluate sociotechnical systems through the interplay of their technical and human components.

  • How Culture Shapes What People Want From AI
    Chunchen Xu, Xiao Ge, Daigo Misaki, Hazel Markus, Jeanne Tsai
    May 11
    Research

    There is an urgent need to incorporate the perspectives of culturally diverse groups into AI developments. We present a novel conceptual framework for research that aims to expand, reimagine, and reground mainstream visions of AI using independent and interdependent cultural models of the self and the environment. Two survey studies support this framework and provide preliminary evidence that people apply their cultural models when imagining their ideal AI. Compared with European American respondents, Chinese respondents viewed it as less important to control AI and more important to connect with AI, and were more likely to prefer AI with capacities to influence. Reflecting both cultural models, findings from African American respondents resembled both European American and Chinese respondents. We discuss study limitations and future directions and highlight the need to develop culturally responsive and relevant AI to serve a broader segment of the world population.

  • Minority-group incubators and majority-group reservoirs for promoting the diffusion of climate change and public health adaptations
    Matthew Adam Turner, Alyson L Singleton, Mallory J Harris, Cesar Augusto Lopez, Ian Harryman, Ronan Forde Arthur, Caroline Muraida, James Holland Jones
    Jan 01
    Research

    Current theory suggests that heterogeneous metapopulation structures can help foster the diffusion of innovations to solve pressing issues including climate change adaptation and promoting public health. In this paper, we develop an agent-based model of the spread of adaptations in simulated populations with minority-majority metapopulation structure, where subpopulations have different preferences for social interactions (i.e., homophily) and, consequently, learn preferentially from their own group. In our simulations, minority-majority-structured populations with moderate degrees of in-group preference better spread and maintained an adaptation compared to populations with more equal-sized groups and weak homophily. Minority groups act as incubators for novel adaptations, while majority groups act as reservoirs for the adaptation once it has spread widely. This suggests that population structure with in-group preference could promote the maintenance of novel adaptations.

  • Interaction of a Buoyant Plume with a Turbulent Canopy Mixing Layer
    Hayoon Chung, Jeffrey R Koseff
    Jun 23
    Research

    This study aims to understand the impact of instabilities and turbulence arising from canopy mixing layers on wind-driven wildfire spread. Using an experimental flume (water) setup with model vegetation canopy and thermally buoyant plumes, we study the influence of canopy-induced shear and turbulence on the behavior of buoyant plume trajectories. Using the length of the canopy upstream of the plume source to vary the strength of the canopy turbulence, we observed behaviors of the plume trajectory under varying turbulence yet constant cross-flow conditions. Results indicate that increasing canopy turbulence corresponds to increased strength of vertical oscillatory motion and variability in the plume trajectory/position. Furthermore, we find that the canopy coherent structures characterized at the plume source set the intensity and frequency at which the plume oscillates. These perturbations then move longitudinally along the length of the plume at the speed of the free stream velocity. However, the buoyancy developed by the plume can resist this impact of the canopy structures. Due to these competing effects, the oscillatory behavior of plumes in canopy systems is observed more significantly in systems where the canopy turbulence is dominant. These effects also have an influence on the mixing and entrainment of the plumes. We offer scaling analyses to find flow regimes in which canopy induced turbulence would be relevant in plume dynamics.

  • Stanford AI Scholars Find Support for Innovation in a Time of Uncertainty
    Nikki Goth Itoi
    Jul 01
    news

    Stanford HAI offers critical resources for faculty and students to continue groundbreaking research across the vast AI landscape.

This project examines the concrete organizational roadblocks shaping the implementation of FATE (fairness, accountability, transparency, and ethics) values as the technology industry designs and implements AI systems. Few existing studies tie ethical issues around AI systems to how firms implement these systems at scale, and to the frictions that firms encounter depending on the domain and procedure for implementation. Our project aims to fill these critical gaps by drawing on in-depth qualitative research to examine “AI accountability in practice,” focusing not only on the internal features of FATE models but also on the details of their uses, interpretations, and circulation across departments and companies. To do so, we rely on a compare-and-contrast approach, analyzing current efforts to implement FATE values in light of historical cases where industries such as aviation, healthcare, and corporate social responsibility sought to address comparable issues, with different outcomes. In parallel, we draw on interviews and content analysis to map the range of FATE strategies and documentation currently implemented in the technology sector. We complement this review with in-depth case studies of technology companies. Through this structured comparison of the main strategies developed to implement such values across domains and periods, we hope to document what is new, and what is not, in the organizational hurdles and constraints that technology firms face when addressing the challenges of AI systems in terms of fairness, accountability, transparency, and ethics.

Name | Role | School | Department
Riitta Katila | PI | School of Engineering | Management Science and Engineering
Angele Christin | Co-PI | School of Humanities and Sciences | Communication

As humans interact more with machines, they come into contact with multitudes of electronics and devices, so understanding and assessing the effects of the underlying materials on human health is important. In addition, the materials and chemicals these machines are made from could adversely affect the environment both during use and after the end of their life cycles. Estimating the potential health risks (e.g., chronic toxicity that accelerates aging) posed by these environmental chemicals, materials, and drugs is challenging because of the large number of diverse chemicals and materials with generally uncharacterized exposures, mechanisms, and toxicities. This affects not only employees in manufacturing plants but also people indoors and outdoors as they live and work in an increasingly anthropogenic world. For example, less than 1% of the chemicals registered for commercial use in the US have undergone toxicity characterization. Environmental and human sustainability in the face of human-machine advancement is a major challenge.

The goal of this project is to build an in-silico modeling framework for estimating toxicity of new chemicals that are incorporated in products and to use this framework for studying the chemical basis of toxicity, which is currently unknown. We plan to do this by using a hybrid methodology that combines deep neural networks, natural language processing, and structural chemical analysis.

If this effort is successful, it will impact and potentially transform toxicity assessment of drugs and chemicals in three ways. First, this novel framework can provide scientists and regulators with an approach to rapid toxicity assessment that keeps pace with the large number of diverse chemicals incorporated in everyday products. Second, because the molecular features of toxicity are not well understood, this work will help create a new predictive framework that links similar chemical structures with toxicity, thereby guiding experimental testing of each chemical in animal models. Third, this proposal, which aims to integrate multiple sources of toxicity data for over 100,000 compounds, including drugs, will provide unprecedented opportunities to study the toxicity of materials in the environment and in products and their effects on human health.

Name | Role | School | Department
Sadasivan Shankar | PI | Department of Humanities and Sciences | Chemistry
Richard Zare | Co-PI | | SLAC

Our research brings the tools of AI to bear on a pressing challenge in today’s world: how to build a stronger, more just democracy that embraces sustainable development. One mechanism for building a just and sustainable future is to re-imagine how we teach youth to become citizens. For many decades, investment in civics and history education has languished behind attention to science, technology, engineering, and math. The result is that we have little systematic knowledge of what students learn about becoming good citizens. However, AI technologies afford an opportunity to contribute to the greater good by revealing what students learn about becoming citizens. We will use Natural Language Processing (NLP) to provide a comprehensive assessment of the content of history and civics textbooks: documenting textbook depictions of diversity, equity, and sustainability; developing and testing arguments to explain why textbook content varies; and adapting existing NLP methods to the domain of textbook data. Understanding and changing textbook content is one important lever in the multi-faceted process required to redesign history and civics education. In recent years, issues of growing income inequality, persistent racial injustice, and increasingly devastating climate disasters have taken center stage in public discourse. As a result, there is a unique window of opportunity to create positive social change in history and civics education.
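As one illustration of the kind of NLP measurement described above, the sketch below computes a normalized keyword rate: how often theme-related terms appear per 1,000 words of textbook text. The lexicon and the excerpt are invented for illustration and are not the project's actual instruments; a real study would use validated dictionaries or trained classifiers.

```python
import re

# Hypothetical sketch: rate of theme-related terms per 1,000 words of text.
# LEXICON and the excerpt are invented for illustration.
LEXICON = {"diversity", "equity", "sustainability", "citizenship"}

def theme_rate(text, lexicon=LEXICON):
    """Occurrences of lexicon terms per 1,000 words of `text`."""
    words = re.findall(r"[a-z]+", text.lower())
    hits = sum(1 for w in words if w in lexicon)
    return 1000.0 * hits / len(words) if words else 0.0

excerpt = ("Citizenship education should reflect the diversity of the nation "
           "and teach students about equity and sustainability in public life.")
rate = theme_rate(excerpt)  # 4 lexicon hits across 19 words
```

Comparing such rates across textbooks, states, or decades is one simple way to document how depictions of these themes vary.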

Name | Role | School | Department
Patricia Bromley | PI | Graduate School of Education | Graduate School of Education

Algorithmic impact assessments (AIAs) have emerged as a promising framework for auditing and regulating automated decision systems. In an AIA, the developer of an automated decision system studies the system’s effects on its users, including potential implications for fairness, justice, privacy, or bias. However, a major barrier to the adoption of AIAs is the lack of clarity on what empirical evaluations AIAs consist of and how practitioners should implement them. Our work seeks to alleviate this challenge by developing open source tools for scalably implementing AIAs and understanding how the computational principles underlying ML evaluation should inform regulatory and policy guidelines.
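To make concrete what one such empirical evaluation might look like, the sketch below computes a demographic parity gap, the difference in approval rates across user groups, for an automated decision system. The decision records are invented for illustration; this is one common fairness measurement, not the project's specific tooling.

```python
from collections import defaultdict

# Hypothetical AIA-style evaluation: measure the demographic parity gap of an
# automated decision system's outputs. The (group, approved) records below
# are invented for illustration.

def selection_rates(decisions):
    """Approval rate per group from (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in approval rates between any two groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

decisions = ([("A", True)] * 8 + [("A", False)] * 2 +
             [("B", True)] * 5 + [("B", False)] * 5)
# Group A is approved 80% of the time, group B 50%: a 0.3 parity gap.
```

An open-source AIA toolkit would package many such metrics behind a common interface, which is what makes the evaluations scalable and auditable.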

Name | Role | School | Department
Christopher Re | PI | School of Engineering | Computer Science
Daniel Ho | Co-PI | School of Law | School of Law

Name | Role | School | Department
Surya Ganguli | PI | School of Humanities and Sciences | Applied Physics
Mark Schnitzer | Co-PI | School of Humanities and Sciences | Biology and Applied Physics

How do we build artificial intelligence systems that reflect our values? Current algorithmic approaches rely upon the problematic assumption that there is a single consensus answer that AI systems should imitate. In social computing systems such as Facebook, Wikipedia, and Twitter, as well as classification tasks ranging from content moderation to hate speech detection to misinformation detection (Borkan et al. 2019; Zhou and Zafarani 2018), there often exist fundamental disagreements between majority and minority groups about what the correct labels ought to be. Unfortunately, the ground truth labels used to train these AIs are typically determined by a majority vote of a small handful of labelers (Muller et al. 2021), often resulting in the perspective of the most numerous group overriding other groups in determining the ground truth label for training and evaluation. The resulting AIs appear to have excellent performance on held-out test sets (Gordon et al. 2021), and then launch to the public, where they fail marginalized groups. Our project asks: how might we rearchitect AI systems for situations where irreconcilable disagreements between groups make it difficult to define a ‘ground truth’ label?
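The failure mode described above can be shown with a toy example: when the label is set by a single majority vote, a minority group's judgments vanish entirely from the training signal. The annotations below are invented, and keeping per-group labels is just one possible alternative, not the project's proposed architecture.

```python
from collections import Counter

# Hypothetical toy illustration: majority-vote aggregation erases a minority
# group's labels, while per-group aggregation preserves the disagreement.
# All annotations below are invented.

def majority_label(labels):
    """Single consensus label by plurality vote."""
    return Counter(labels).most_common(1)[0][0]

def per_group_labels(annotations):
    """Each group's own majority label, from (group, label) pairs."""
    by_group = {}
    for group, label in annotations:
        by_group.setdefault(group, []).append(label)
    return {g: majority_label(ls) for g, ls in by_group.items()}

# 7 majority-group raters say "not_toxic"; 3 minority-group raters say "toxic".
annotations = [("majority", "not_toxic")] * 7 + [("minority", "toxic")] * 3
consensus = majority_label([label for _, label in annotations])
grouped = per_group_labels(annotations)  # the disagreement survives here
```

Under the consensus scheme the minority view never reaches the model; the per-group view at least exposes the disagreement for the system designer to handle.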

Name | Role | School | Department
Tatsunori Hashimoto | PI | School of Engineering | Computer Science
Michael Bernstein | Co-PI | School of Engineering | Computer Science
Jeffrey Hancock | Co-PI | School of Humanities and Sciences | Communication

Vaccination is fundamental for ending the current coronavirus pandemic. However, vaccines are limited in most places around the world, and, with asymptomatic cases suspected to be a significant proportion of the population (Vogl T, Leviatan S, Segal E, 2021), it is broadly unknown who is protected and who is not. Important questions also remain regarding the true spread of the pandemic and the correlation of an antibody response with risk of reinfection, COVID-19 disease severity, and other diseases. Fortunately, significant amounts of antibody against SARS-CoV-2 may persist for several weeks to months after infection (Grossberg et al. 2021) or vaccination, which allows for detection even after the virus is no longer detected by nucleic acid or antigen tests. Utilizing the power of a simple, instrument-free assay, machine learning, and a cell phone, we propose to build a simple cellphone-based tool for microagglutination analysis that will allow anyone in the world to self-measure their antibody status.

Name | Role | School | Department
Manu Prakash | PI | School of Engineering | Bioengineering
Peter Kim | Co-PI | School of Medicine | Biochemistry

A major policy concern with the advent of Big Data and Artificial Intelligence is that firms in online markets, with access to large amounts of sensitive information about individual consumers and sophisticated machine learning tools, may exploit this advantage to price discriminate against individual consumers by charging higher prices to consumers who are known to have a higher demand for a particular product. Concerns about the negative ways in which consumers might be harmed by exploitation of their personal data have led to congressional hearings and to state and national policy proposals both in the U.S. and internationally. Still, little is known about how such policies should be designed to best protect consumers without hampering the progress of AI technologies that might actually benefit them. Our project aims to provide guidance on this front. We will use AI tools to estimate a model of how large firms' use of consumer data affects consumers, drawing on a large database of proprietary consumer data. Our key preliminary finding: AI uses of personal data can help consumers where there is sufficient competition between firms and when all firms have similar levels of access to consumers' information. Outside of these boundaries, consumers may be harmed. These preliminary findings suggest that regulators should pay particular attention to protecting competition in AI-intense consumer retail marketplaces, such as e-commerce.

Name | Role | School | Department
Patrick Kehoe | PI | School of Humanities and Sciences | Economics
Brad Larsen | Co-PI | School of Humanities and Sciences | Economics
Elena Pastorino | Co-PI | Hoover Institution, Stanford University & Stanford Institute of Economic Policy Research (SIEPR) | Dean of Research

HUMAIN, the Humanities AI Network, is a Collaboratory that is dedicated to problem-solving through humanistic research using AI methods. Our focus is ‘The Uncertainty of Being(s) Recorded’. We’ll be hosting two major workshops in 2022 that will tackle ‘Lost and Found: Discoverability and Surveillance’, and ‘Bodies of Records: the Aesthetics of Archives and AI’. Involving faculty, research staff, graduates and undergraduates, our collaboration tackles the most urgent problems in how data—from the earliest days of the human record into the future of predictive processes of recording—both reflect and contribute to human experience and effort. Our concerns involve ways in which the record shapes a version of reality; how it oppresses, privileges, and gives voice to dominant groups in society. Whose reality is represented in the data? How are subjects represented? And who reads, watches, and secures that data into the future?

Name | Role | School | Department
Elaine Treharne | PI | School of Humanities and Sciences | English
Mark Algee-Hewitt | Co-PI | School of Humanities and Sciences | English
Anna Bigelow | Co-PI | School of Humanities and Sciences | Religious Studies
Angele Christin | Co-PI | School of Humanities and Sciences | Communication
Shane Denson | Co-PI | School of Humanities and Sciences | Art and Art History
Stephen Monismith | Co-PI | School of Engineering | Civil and Environmental Engineering
Ge Wang | Co-PI | School of Humanities and Sciences | Music

Humans understand the world in terms of cause and effect. In our everyday lives, we experience ourselves as agents affecting the world by physically interacting with it. Our haptic sense is a primary source of how we learn about the causal structure of the world. However, very little research has looked at how haptic information shapes causal perception in concert with information from other sensory modalities. We believe that in order to acquire a full human-like understanding of causation (to be truly in touch with causation), AI agents need to be grounded in the physical world. In this project, we will combine novel psychophysical experiments with computational modeling to investigate what role haptic experience plays in causal perception and inference. Our proposed work will develop a computational model of how humans combine haptic evidence with evidence from other sense modalities to make causal inferences. This model lays a foundation for human-inspired learning techniques in robotic systems.
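A standard starting point for modeling how evidence from multiple senses is merged is precision-weighted (Bayesian) cue combination, sketched below with invented numbers; the project's actual model of haptic causal inference would be richer than this.

```python
# Hypothetical sketch of precision-weighted cue combination: each sensory
# modality contributes an estimate weighted by the inverse of its variance.
# The haptic/visual numbers are invented for illustration.

def combine(cues):
    """cues: list of (estimate, variance) pairs, one per modality."""
    precisions = [1.0 / var for _, var in cues]
    total = sum(precisions)
    mean = sum(mu * p for (mu, _), p in zip(cues, precisions)) / total
    return mean, 1.0 / total  # combined estimate and its variance

# A sharp haptic cue (variance 1.0) should dominate a blurry visual cue
# (variance 4.0), pulling the combined estimate toward the haptic one.
haptic = (10.0, 1.0)
visual = (14.0, 4.0)
mu, var = combine([haptic, visual])  # mu = 10.8, var = 0.8
```

The combined variance is always lower than either cue's alone, which is why multisensory estimates are more reliable than unisensory ones in this class of models.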

Name | Role | School | Department
Sean Follmer | PI | School of Engineering | Mechanical Engineering
Jeannette Bohg | Co-PI | School of Engineering | Computer Science
Tobias Gerstenberg | Co-PI | School of Humanities and Sciences | Psychology

Online advertising is pervasive and powerful, but remains understudied from the perspective of users. Unlike other forms of media, users have limited control over the ads they are shown, can be targeted based on potentially inaccurate or insensitive inferred attributes, and are consciously and unconsciously prompted to change their beliefs and behaviors by ads. Our team will build In(advert)ent, the first user-centered system to study race and gender biases in online advertising. Our system will allow us to understand the lived experiences of real internet users as they encounter repeated exposures to numerous independent, personalized ad delivery platforms that follow them across the web. With this user-centered, cross-platform, in-the-wild approach, we will observationally measure race and gender disparities in the content and audience of ads, and also experiment with interventions to change ad landscapes and measure their effect on users’ behaviors and beliefs.

Name | Role | School | Department
Jeffrey Hancock | PI | School of Humanities and Sciences | Communication
James Landay | Co-PI | School of Engineering | Computer Science

The growing need for ambulatory patient monitoring in heart failure management is often limited by the unpredictability of cardiovascular events, the intermittent nature of current clinical practices (including physical exams and/or medical imaging), and the variable clinical significance of recorded data in patients. Technological advances in sensing, miniaturization, and wireless communication support the introduction of implantable physiological sensors that can greatly enhance the monitoring of cardiac patients to detect abnormalities and predict adverse events. Through this project, we propose the implementation of analytical methods to improve the accuracy and actionability of biosignals collected by minimally invasive sensor technology. Specifically, we leverage artificial intelligence and machine learning to extract highly actionable insights on cardiac health from device-based clinical data, in order to inform treatment strategies and medical decision-making. Coupled with seamless connectivity (IoT), preserved anonymity, and interoperability of data across stakeholders, this approach can ultimately realize real-time monitoring of cardiac health and adjustment of therapy.

Name | Role | School | Department
William Hiesinger | PI | School of Medicine | Cardiothoracic Surgery
Mark Cutkosky | Co-PI | School of Engineering | Mechanical Engineering

Heart disease is the leading cause of death worldwide, with replacement of the heart with a donor heart as the sole option available upon end-stage heart failure. An attractive alternative is transplantation of heart muscle cells, cardiomyocytes, derived from the patient’s own cells. This therapeutic strategy bypasses immunogenicity and the associated high risk of transplant rejection. While it holds great promise, there is still much to understand about the genetic and molecular basis of cardiomyocyte cell biology to ensure that regeneration, repair, and precision integration of cardiomyocytes into heart tissue are safe and effective. It requires an unprecedented level of research detail: for one, distinct and long species of RNA originating from the same gene, so-called isoforms, underlie the cell biology of cardiomyocytes. In fact, the two largest genes found in the human heart, DMD, coding for the protein dystrophin, and TTN, coding for titin, measure a stunning 2.3 megabases (Mb) and 0.3 Mb, respectively. For the former, that is close to one millimeter of DNA. The quantitative measurement of the complex RNA species that originate from these and other genes is painstakingly difficult. We aim to iteratively improve the design of molecular identifiers, short DNA sequences that allow us to trace tens of thousands of single molecules of RNA back to their origin: we use the human ability to recognize patterns in noisy environments to distill machine-learned features that are translatable to the experimental setup. Separable, traceable identifiers allow us to measure long RNA species in single cardiomyocytes at the scale of the human heart. We anticipate that insights generated through these measurements will increase the precision and accuracy with which we can reprogram heart muscle cells, catalyzing the development of therapeutic strategies for heart disease.
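The idea of separable, traceable identifiers can be illustrated with a minimal sketch: greedily selecting short DNA barcodes whose pairwise Hamming distance stays above a threshold, so that a few sequencing errors cannot turn one identifier into another. The barcode length, distance threshold, and set size below are illustrative assumptions, not the project's design.

```python
import random

# Hypothetical sketch of "separable" identifier design: greedily collect DNA
# barcodes that differ from every accepted barcode in at least `min_dist`
# positions. Length, distance, and set size are illustrative assumptions.

BASES = "ACGT"

def hamming(a, b):
    """Number of positions at which two equal-length strings differ."""
    return sum(x != y for x, y in zip(a, b))

def pick_barcodes(length=6, min_dist=3, target=20, seed=0):
    rng = random.Random(seed)
    chosen = []
    for _ in range(20000):  # sample candidates; keep well-separated ones
        cand = "".join(rng.choice(BASES) for _ in range(length))
        if all(hamming(cand, b) >= min_dist for b in chosen):
            chosen.append(cand)
            if len(chosen) == target:
                break
    return chosen

codes = pick_barcodes()
```

With minimum distance 3, any single-base error leaves a read closer to its true barcode than to any other, so it can be corrected rather than misassigned.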

Name | Role | School | Department
Lars Steinmetz | PI | School of Medicine | Genetics
Tsachy Weissman | PI | School of Engineering | Electrical Engineering

Traditional discrete signal representations are fundamentally incompatible with many emerging AI techniques that require continuous, differentiable representations. In this project, we explore AI models using implicit representations with broad applications in robotics, 3D vision, medical imaging, graphics, and interactive design. These representations model multi-dimensional signals as multilayer perceptrons and offer continuous, differentiable signal representations that are suitable for end-to-end optimized application-domain-specific task performance. For example, in robotics applications this includes decision making based on a policy network jointly learned with the inferred scene representation. A key question for this project is how to learn distributions of functions that represent classes of signals, with the goal of allowing AI to reason about this function space to represent, generate, interpolate, edit, and compress signals in the aforementioned applications.
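The core idea, a network that maps continuous coordinates to signal values, can be sketched at toy scale. Instead of a full multilayer perceptron, the example below fits a linear model over fixed Fourier features to a sampled 1D signal by gradient descent; once trained, it can be queried at any continuous coordinate. This is an illustrative reduction under invented parameters, not the project's method.

```python
import math

# Hypothetical toy "implicit representation": a continuous coordinate x maps
# to a signal value through learned weights over fixed Fourier features.
# A real system would use a multilayer perceptron; this linear sketch keeps
# the coordinates-in, values-out structure at illustrative scale.

FREQS = [1.0, 2.0, 3.0, 4.0]

def features(x):
    """Fourier features of a coordinate x in [0, 1)."""
    out = [1.0]
    for f in FREQS:
        out.append(math.sin(2 * math.pi * f * x))
        out.append(math.cos(2 * math.pi * f * x))
    return out

def fit(xs, ys, lr=0.1, steps=2000):
    """Minimize mean squared error with plain gradient descent."""
    w = [0.0] * len(features(0.0))
    n = len(xs)
    for _ in range(steps):
        grad = [0.0] * len(w)
        for x, y in zip(xs, ys):
            phi = features(x)
            err = sum(wi * pi for wi, pi in zip(w, phi)) - y
            for i, pi in enumerate(phi):
                grad[i] += 2 * err * pi / n
        w = [wi - lr * gi for wi, gi in zip(w, grad)]
    return w

def query(w, x):
    """Evaluate the continuous representation at any coordinate."""
    return sum(wi * pi for wi, pi in zip(w, features(x)))

def signal(x):  # ground-truth signal, observed only at 32 discrete samples
    return math.sin(2 * math.pi * 2 * x) + 0.5 * math.cos(2 * math.pi * 3 * x)

xs = [i / 32 for i in range(32)]
w = fit(xs, [signal(x) for x in xs])
# query(w, x) now approximates signal(x) at coordinates never seen in training.
```

Because the representation is differentiable in both its weights and its input coordinate, it can, in principle, be optimized end-to-end inside a larger task loss, which is the property such representations exploit.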

Name | Role | School | Department
Gordon Wetzstein | PI | School of Engineering | Electrical Engineering
Jeannette Bohg | Co-PI | School of Engineering | Computer Science
Jiajun Wu | Co-PI | School of Engineering | Computer Science

Machine learning offers great promise to narrow healthcare gaps and provide high-quality care in America and across the globe. Its success is predicated on the ability to collect and utilize large volumes of data, leading to an unsurprisingly rapid growth in data volume, velocity, and variety. However, more data does not necessarily mean better data. There are significant gaps in our data, as well as over- and under-representation of conditions or entire populations, presenting us with several problems: how do we currently control for missing information? How do we identify and then fill the gaps this missing data presents? And how do we plan for the incorporation of data we do not yet measure? We propose a consensus conference of experts in AI, participatory research, and design to share their knowledge and set an agenda for addressing these complex and critically important issues of missing data in the datasets used for machine learning.

Name | Role | School | Department
Christian Rose | PI | School of Medicine | Emergency Medicine
Italo Brown | Co-PI | School of Medicine | Emergency Medicine
Michael Gisondi | Co-PI | School of Medicine | Emergency Medicine

Despite substantial progress, science still lacks a firm understanding of brain functions such as memory, cognition, and movement control. It is clear that these functions are performed by the coordinated activity of millions of neurons in large networks, but the computations these networks carry out remain elusive. Advances in computing have led to the development of sophisticated artificial neural networks (ANNs), which are loosely modeled after biological neuronal networks (BNNs). While ANNs and BNNs differ in their underlying structure and individual components, they share some emergent network properties, and both can solve complex computational problems. This project will assess the compatibility of ANNs and BNNs by attempting to integrate them so that they share information in support of function. If successful, this proof-of-concept study will demonstrate the feasibility of ANNs serving as an external support system for BNN function, representing an important breakthrough for next-generation implanted medical devices that treat brain disease.

Name | Role | School | Department
Paul Nuyujukian | PI | School of Medicine | Bioengineering and Neurosurgery
Stephen Clarke | | School of Engineering | Bioengineering

Only net-zero emissions can stop global warming, and reaching net zero will be virtually impossible without carbon capture, utilization, and sequestration (CCUS). While the concept of subsurface CO2 sequestration is well developed, actual operations remain limited to 21 commercially operating sites worldwide (as of 2020). As the world starts pricing carbon, commercial sequestration will become increasingly economically viable. That alone, however, does not solve the technological challenges of storing CO2 in the subsurface, the speed at which this must be done, and the vast geographic diversity over which it must be achieved. Decisions will need to be made on which sites to select; which data to acquire for a final assessment of CO2 storage potential; and how to optimally operate such systems, including choosing well locations and rates, designing monitoring, and determining what additional infrastructure must be built to connect the CO2 source. Our project will develop a state-of-the-art intelligent agent by formulating sequential decision problems of this nature as Partially Observable Markov Decision Processes (POMDPs) and developing practical solution methods in real-world settings that can address the speed and urgency of storing CO2 in the subsurface.
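At the core of any POMDP formulation is the belief update: the agent maintains a probability distribution over the hidden state (here, unknown subsurface properties) and revises it with each observation, such as new survey data. A toy sketch follows; the two-state "reservoir quality" example and its observation probabilities are invented for illustration, not taken from the project.

```python
# Hypothetical two-state POMDP belief update for a CO2-storage decision:
# hidden state = reservoir quality, observation = noisy survey result.
states = ["good", "poor"]
belief = {"good": 0.5, "poor": 0.5}  # prior over the hidden state

# P(observation | state): a survey reads "favorable" more often at good sites.
obs_model = {
    "good": {"favorable": 0.8, "unfavorable": 0.2},
    "poor": {"favorable": 0.3, "unfavorable": 0.7},
}

def update(belief, obs):
    """Bayes rule: b'(s) is proportional to P(obs | s) * b(s)."""
    unnorm = {s: obs_model[s][obs] * belief[s] for s in states}
    z = sum(unnorm.values())  # normalizing constant
    return {s: p / z for s, p in unnorm.items()}

# After a favorable survey, belief shifts toward "good"; a POMDP policy
# would choose the next action (acquire more data, drill, walk away)
# as a function of this belief, not of any single point estimate.
belief = update(belief, "favorable")
```

Real formulations would add transition dynamics, costs, and many continuous state variables, with approximate solvers in place of exact updates; this only shows the belief-state bookkeeping that sequential decision-making rests on.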

Name | Role | School | Department
Jef Caers | PI | School of Earth, Energy and Environmental Sciences | Geological Sciences
Sally Benson | Co-PI | School of Earth, Energy and Environmental Sciences | Energy Resources Engineering
Mykel Kochenderfer | Co-PI | School of Engineering | Aeronautics and Astronautics
Tapan Mukerji | Co-PI | School of Earth, Energy and Environmental Sciences | Energy Resources Engineering

Increasingly, workers must rely on online labor markets' rating systems: algorithmic systems that label workers with a one-to-five-star rating. These AI-based systems are the technological bedrock of the platforms, influencing everything from how work is allocated, to the wages workers can command, to the career paths open to them. These systems, however, are backward-looking, showcasing only past behavior: workers with no or few prior projects struggle to gain attention, and workers who wish to grow their careers by acquiring new skills struggle to build a reputation in the new area. Our project seeks to develop a forward-looking digital resumé for workers on online labor platforms, drawing on market data to highlight how a worker's experiences prepare them for the next job they are interested in. This work brings together statistical methodology, algorithm design, human-computer interaction, and political economy to enable the career pathways for online workers that we take for granted in offline work. We will introduce a novel approach based on empirical Bayesian estimation that "borrows" knowledge from other workers' past experiences and avoids penalizing newcomers for a lack of prior work experience. We will engage in a participatory design process with online IT workers to build and publicly launch our new digital resumé platform for workers to use when applying to jobs on online work platforms.
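The essence of such "borrowing" can be sketched as empirical-Bayes shrinkage: a worker's observed mean rating is pulled toward the market-wide mean, most strongly when the worker has little history, so a newcomer is not treated as a zero. The numbers and the prior strength below are illustrative assumptions, not the project's actual model.

```python
import statistics

# Ratings observed across other workers in the market (invented data).
market_ratings = [4.1, 4.5, 3.9, 4.8, 4.2, 4.6, 4.0, 4.4]
mu0 = statistics.mean(market_ratings)  # prior mean, "borrowed" from the market
k = 5.0                                # prior strength, in pseudo-observations

def shrunk_rating(worker_ratings):
    """Posterior-mean style estimate: a weighted blend of the market prior
    and the worker's own ratings. With n observations the worker's own
    data carries weight n / (k + n), so newcomers default to the prior."""
    n = len(worker_ratings)
    return (k * mu0 + sum(worker_ratings)) / (k + n)

newcomer = shrunk_rating([])         # no history: falls back to market mean
veteran = shrunk_rating([5.0] * 40)  # long history: stays near own average
```

The design choice here is that `k` controls how much evidence a worker must accumulate before their own record dominates the estimate; a platform would fit it from data rather than fix it by hand.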

Name | Role | School | Department
Michael Bernstein | PI | School of Engineering | Computer Science
Ramesh Johari | Co-PI | School of Engineering | Management Science and Engineering
Margaret Levi | Co-PI | School of Humanities and Sciences | Political Science

The past decade has seen the dramatic success of deep convolutional neural networks for computer vision. Classifiers trained on large-scale image and video datasets (e.g., ImageNet) deliver performance at or above human level, along with generic visual representations that enable transfer learning to other visual classification tasks. These innovations expand our ability to obtain rich, versatile visual representations of naturalistic image data and to parse the semantics of the human visual world. Increasingly, however, much of human visual experience is mediated through computer, tablet, and smartphone screens. To better understand, model, and meaningfully contribute to this important segment of human experience (life lived on and through screens), we need rich and versatile visual representations of the digital environments encountered in everyday life. With those representations, we can map how humans navigate, learn from, and contribute to digital life. We propose to use newly available data and analyses grounded in media psychology to develop visual representations that facilitate semantic labeling of human screen-based behavior and support the development of new computational models of human learning and curiosity.

Name | Role | School | Department
Nick Haber | PI | Graduate School of Education | Graduate School of Education
Nilam Ram | Co-PI | School of Humanities and Sciences | Communication and Psychology
Byron Reeves | Co-PI | School of Humanities and Sciences | Communication
Thomas Robinson | Co-PI | School of Medicine | Child Health

Modern AI systems have seen success in a wide variety of tasks, such as image recognition, natural language processing, and game playing, which has enabled their use in safety-critical settings such as transportation and healthcare. Unfortunately, analyses of these systems have demonstrated ways in which AI systems can be unsafe, including sensitivity to small changes in their input and reliance on protected features such as race and gender to make decisions. The HAI seed grant will fund a postdoctoral position at the Stanford Center for AI Safety, which is dedicated to understanding and mitigating the risks associated with AI systems. The center sits at the intersection of industry applications, technical AI research, and government regulation. The recipient of the fellowship will develop a strategic research vision that addresses large-scale AI safety challenges by connecting their work with other researchers at the center, at Stanford more broadly, and in industry. We expect this fellowship to jump-start a tradition of postdoctoral fellowships at the center, ultimately helping to address the biggest challenges in AI safety.

Name | Role | School | Department
Clark Barrett | PI | School of Engineering | Computer Science
Mykel Kochenderfer | Co-PI | School of Engineering | Aeronautics and Astronautics
Dorsa Sadigh | Co-PI | School of Engineering | Computer Science/Electrical Engineering

Some of the most impressive achievements of the human mind, including the contributions of mathematicians, physicists, and computer scientists, have led to deep scientific understanding and powerful technologies. We seek to combine human and machine learning research to build toward artificially intelligent systems that could achieve these abilities. Our goals are guided by the tenet that scientific problem-solving skills rely on a combination of intuition and systematic reasoning, and that such skills are learned through both explicit instruction and explanatory discourse. Unlike typical machine learning systems that learn only from explicit instruction (i.e., input-output pairs), our initial research will focus on building an explanation-based meta-learning framework that allows machines to quickly learn new formal reasoning skills from a combination of instruction and explanation. If successful, the proposed research could both further our understanding of human cognition and help address the underspecification problem in machine learning research.

Name | Role | School | Department
Chelsea Finn | PI | School of Engineering | Computer Science and Electrical Engineering
Jay McClelland | Co-PI | School of Humanities and Sciences | Psychology

Magnetic resonance imaging (MRI)-integrated radiotherapy has the potential to improve the treatment of cancer patients through the delivery of precise, high-dose radiation. Imaging guidance provided by MRI during radiation treatment can enable higher doses to cancer cells than are possible with traditional techniques, which risk toxicity to surrounding organs. The specific needs of daily adaptive treatment planning and live decision-making during radiotherapy, however, diverge significantly from diagnostic MRI and impose more stringent limits on imaging time. The goal of this work is to exploit the wealth of individualized priors from radiotherapy patients, who receive multiple imaging and treatment sessions throughout the radiotherapy course, and to combine such priors with domain-specific knowledge of MRI physics in a deep learning framework, allowing dramatic subsampling during image acquisition while still supporting adaptive radiotherapy with sufficient anatomical and biological information. Specifically, we aim to construct deep learning models that address two challenges in current MRI-guided radiotherapy practice: 1) real-time 3D tumor tracking during radiotherapy delivery, where model-based 3D MRI will be obtained within sub-seconds and used to update tumor position in real time for accurate radiation beam placement; and 2) daily quantitative imaging for treatment response monitoring, where a series of model-based MR images will be obtained within a clinically acceptable scan time and used to support daily biological parameter mapping, from which biomarkers predictive of treatment response can be extracted and used to guide radiation dose adjustment.
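A hypothetical sketch of the acquisition-side problem follows: MRI data is collected in the frequency domain (k-space), and subsampling means skipping most k-space rows, which a learned prior must then compensate for at reconstruction time. The toy image, mask density, and kept low-frequency band below are all illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "anatomy": a bright square on a 64x64 field.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0

# Fully sampled k-space (frequency domain), low frequencies at the center.
kspace = np.fft.fftshift(np.fft.fft2(img))

# Retrospective undersampling: keep roughly a quarter of the phase-encode
# rows at random, but always retain the central low-frequency rows, which
# carry most of the gross image contrast.
mask = rng.random(64) < 0.25
mask[28:36] = True
under = kspace * mask[:, None]  # zero out the skipped rows

# Naive zero-filled reconstruction; the aliasing left here is exactly what
# a patient-specific learned prior would be trained to remove.
recon = np.abs(np.fft.ifft2(np.fft.ifftshift(under)))
```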

Name | Role | School | Department
Lucas Vitzthum | PI | School of Medicine | Radiation Oncology - Radiation Therapy
Daniel Chang | Co-PI | School of Medicine | Radiation Oncology - Radiation Therapy
Lianli Liu | Co-PI | School of Medicine | Radiation Oncology - Radiation Physics
John Pauly | Co-PI | School of Engineering | Electrical Engineering
Lei Xing | Co-PI | School of Medicine | Radiation Oncology - Radiation Physics

Humans engage in diverse daily activities in complex environments. Most research in cognitive science, however, has focused on simple, isolated tasks in well-controlled laboratory settings. Thus, despite advances in understanding human thinking and reasoning, these findings are not always easily generalizable to actual human behavior outside the lab. This disconnect has also limited the direct relevance of cognitive science research to the development of AI, particularly where translating cognition into behavior is critical (e.g., robotics, embodied AI). In this project, we aim to address these issues by developing an embodied, interactive virtual reality (EI-VR) platform, accompanied by a curriculum for virtual embodiment learning, to study real-world human cognition and behavior. The new platform will allow us to achieve both experimental control and ecological validity; such studies, in turn, can provide valuable data for driving new advances in AI.

Name | Role | School | Department
Jiajun Wu | PI | School of Engineering | Computer Science
Hyowon Gweon | Co-PI | School of Humanities and Sciences | Psychology
Nick Haber | Co-PI | Graduate School of Education | Graduate School of Education

Attention-Deficit/Hyperactivity Disorder (ADHD) is a highly prevalent neurodevelopmental disorder. Despite prior efforts toward validating objective diagnostics for ADHD, clinical assessments centered on rating scales remain the primary diagnostic method. Prior work has shown that the functioning of the Autonomic Nervous System (ANS) is atypical in many people with ADHD. In this study, we examine the use of longitudinal measurement of Electrodermal Activity (EDA), a peripheral index of autonomic arousal, as a biomarker to 1) aid the objective diagnosis of ADHD and 2) augment the diagnosis of ADHD by identifying potential subtypes, or biotypes. We plan to conduct a 10-day study of the EDA characteristics of individuals with a rating-scale diagnosis of ADHD, using a wearable sensor complemented with other behavioral data such as medication usage and physical activity. The study will yield a rich dataset for computational analysis, allowing us to investigate the robustness of EDA features in characterizing ADHD and its biotypes and to detect atypical EDA activity.
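Two features commonly derived from such recordings are the tonic (slow baseline) skin-conductance level and the count of phasic skin-conductance responses (SCRs). The sketch below computes both on a synthetic trace; the threshold, sampling, and signal shape are illustrative assumptions, not the study's analysis pipeline.

```python
import math

# Synthetic skin-conductance trace, one sample per second for 4 minutes:
# a slow sinusoidal baseline plus a brief 0.8-microsiemens "response"
# once per minute (at seconds 30-31 of each minute).
signal = [
    2.0
    + 0.5 * math.sin(2 * math.pi * t / 60)
    + (0.8 if 30 <= t % 60 < 32 else 0.0)
    for t in range(240)
]

# Tonic feature: the slow baseline level, here approximated by the mean.
tonic_level = sum(signal) / len(signal)

def count_scr_peaks(sig, threshold=0.3):
    """Phasic feature: count abrupt rises exceeding `threshold` between
    consecutive samples, treating each rise-then-fall as one response."""
    peaks, above = 0, False
    for i in range(1, len(sig)):
        rise = sig[i] - sig[i - 1]
        if rise > threshold and not above:
            peaks += 1
            above = True
        elif rise <= 0:
            above = False
    return peaks
```

The slow sinusoid never triggers the counter (its per-second change stays well under the threshold), so only the four injected responses are counted.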

Name | Role | School | Department
Leanne Williams | PI | School of Medicine | Psychiatry and Behavioral Sciences
James Landay | Co-PI | School of Engineering | Computer Science
John Leikauf | Co-PI | School of Medicine | Psychiatry and Behavioral Sciences - Child and Adolescent Psychiatry

"Human-centeredness" is a deceptively simple idea. Humans within a society and across the globe differ in their values, beliefs, and norms. To be human-centered is to deeply understand the representation of the tendencies and needs of people from different cultural backgrounds. Nonetheless, current Artificial Intelligence (AI) development reflects and encodes a powerful universalistic fallacy—that humans in all contexts think and feel and act in similar ways. Dominant visions about AI development in the US have been based on cultural views of a distinctive sample that is disproportionately white, male, and US-centered. The lack of diverse representations of cultural views is likely to result in biased technological development. Designers in different societies lack proper tools to build empathy for and learn from AI development in other cultures. The lack of common understanding of cultural influences on AI development and implications can cause confusion and tension among the public, and hinder cross-cultural collaboration on AI development and ethics. In this proposal, we seek to substantiate the concept of human-centered artificial intelligence by developing a conceptual framework based on cultural views on the environment and on the self to inform the design of AI. As a first step, we will compare cultures in the US and East Asia (China and Japan in particular) in relation to the design of Ambient Intelligence (AmI). We will conduct culture cycle analysis to identify and distill prominent cultural factors that are most relevant to the design of AI at various levels. We will generate futuristic scenarios about Ambient Intelligence (AmI) in accordance with people's worldviews and self-construals, based on which we will conduct empirical studies. Ultimately, we hope to enable equitable, culturally resonant technological development and forge collaborations on AI development and ethics across the globe.

Name | Role | School | Department
Hazel Markus | PI | School of Humanities and Sciences | Psychology
Brian Lowery | Co-PI | Graduate School of Business | Graduate School of Business