
Seed Research Grants

Status: Closed
Date: Applications closed on September 15, 2025

The seed research grants are designed to support new, ambitious, and speculative ideas, with the objective of producing initial results.

In keeping with the multidisciplinary mission of HAI, we welcome proposals from the whole array of humanistic, social scientific, natural scientific, biomedical, and engineering approaches, including critical, historical, ethnographic, clinical, experimental, and inventive work. We fund a wide variety of projects, from discrete studies to book-length research to speaker series to system building and evaluation.

Applications closed on September 15, 2025.

Related
  • Stanford HAI Funds Groundbreaking AI Research Projects
    Nikki Goth Itoi
    Quick Read | Jan 30
    news

    Thirty-two interdisciplinary teams will receive $2.37 million in Seed Research Grants to work toward initial results on ambitious proposals.

  • Policy-Shaped Prediction: Avoiding Distractions in Model-Based Reinforcement Learning
    Nicholas Haber, Miles Huston, Isaac Kauvar
    Dec 13
    Research

    Model-based reinforcement learning (MBRL) is a promising route to sample-efficient policy optimization. However, a known vulnerability of reconstruction-based MBRL consists of scenarios in which detailed aspects of the world are highly predictable, but irrelevant to learning a good policy. Such scenarios can lead the model to exhaust its capacity on meaningless content, at the cost of neglecting important environment dynamics. While existing approaches attempt to solve this problem, we highlight its continuing impact on leading MBRL methods, including DreamerV3 and DreamerPro, with a novel environment where background distractions are intricate, predictable, and useless for planning future actions. To address this challenge, we develop a method for focusing the capacity of the world model through the synergy of a pretrained segmentation model, a task-aware reconstruction loss, and adversarial learning. Our method outperforms a variety of other approaches designed to reduce the impact of distractors, and is an advance towards robust model-based reinforcement learning.
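
    An illustrative sketch of the kind of task-aware reconstruction loss the abstract describes: per-pixel error is down-weighted outside a task-relevance mask (for example, from a pretrained segmentation model) so the world model does not spend capacity on distractor pixels. The function name, tensor shapes, and weighting scheme are assumptions for illustration, not the authors' implementation.

    # Sketch only: segmentation-masked, task-aware reconstruction loss.
    # Names, shapes, and weights are illustrative assumptions.
    import torch
    import torch.nn.functional as F

    def task_aware_recon_loss(pred_frames, true_frames, task_mask, distractor_weight=0.1):
        """Weight per-pixel reconstruction error by a task-relevance mask.

        pred_frames, true_frames: (batch, channels, H, W) image tensors.
        task_mask: (batch, 1, H, W) values in [0, 1], e.g. from a pretrained
            segmentation model marking task-relevant regions.
        distractor_weight: residual weight given to background pixels.
        """
        per_pixel = F.mse_loss(pred_frames, true_frames, reduction="none")
        weights = task_mask + distractor_weight * (1.0 - task_mask)
        return (weights * per_pixel).mean()

    # Minimal usage with random tensors.
    pred = torch.randn(4, 3, 64, 64)
    true = torch.randn(4, 3, 64, 64)
    mask = (torch.rand(4, 1, 64, 64) > 0.5).float()
    print(task_aware_recon_loss(pred, true, mask).item())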

  • LABOR-LLM: Language-Based Occupational Representations with Large Language Models
    Susan Athey, Herman Brunborg, Tianyu Du, Ayush Kanodia, Keyon Vafa
    Dec 11
    Research

    Vafa et al. (2024) introduced a transformer-based econometric model, CAREER, that predicts a worker’s next job as a function of career history (an “occupation model”). CAREER was initially estimated (“pre-trained”) using a large, unrepresentative resume dataset, which served as a “foundation model,” and parameter estimation was continued (“fine-tuned”) using data from a representative survey. CAREER had better predictive performance than benchmarks. This paper considers an alternative where the resume-based foundation model is replaced by a large language model (LLM). We convert tabular data from the survey into text files that resemble resumes and fine-tune the LLMs using these text files with the objective of predicting the next token (word). The resulting fine-tuned LLM is used as an input to an occupation model. Its predictive performance surpasses all prior models. We demonstrate the value of fine-tuning and further show that by adding more career data from a different population, fine-tuning smaller LLMs surpasses the performance of fine-tuning larger models.
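
    A minimal sketch, under assumed column names, of the tabular-to-text step the abstract describes: survey rows for one worker are rendered as a resume-like document that an LLM can be fine-tuned on with a next-token objective. The field names and template are hypothetical, not the paper's format.

    # Sketch only: render tabular career histories as resume-like text for
    # next-token fine-tuning. Column names and the template are hypothetical.
    import csv

    def rows_to_resume_text(rows):
        """Render one worker's job records (hypothetical keys: 'year',
        'occupation', 'industry') as a plain-text resume."""
        lines = ["Work history:"]
        for r in sorted(rows, key=lambda r: int(r["year"])):
            lines.append(f"{r['year']}: {r['occupation']} ({r['industry']})")
        return "\n".join(lines)

    def survey_to_training_file(csv_path, out_path, id_col="worker_id"):
        """Group survey rows by worker and write one document per worker."""
        by_worker = {}
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                by_worker.setdefault(row[id_col], []).append(row)
        with open(out_path, "w") as out:
            for rows in by_worker.values():
                out.write(rows_to_resume_text(rows) + "\n\n")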

  • How Persuasive Is AI-generated Propaganda?
    Josh A. Goldstein, Jason Chao, Shelby Grossman, Alex Stamos, Michael Tomz
    Feb 20
    Research

    Can large language models, a form of artificial intelligence (AI), generate persuasive propaganda? We conducted a preregistered survey experiment of US respondents to investigate the persuasiveness of news articles written by foreign propagandists compared to content generated by GPT-3 davinci (a large language model). We found that GPT-3 can create highly persuasive text as measured by participants’ agreement with propaganda theses. We further investigated whether a person fluent in English could improve propaganda persuasiveness. Editing the prompt fed to GPT-3 and/or curating GPT-3’s output made GPT-3 even more persuasive, and, under certain conditions, as persuasive as the original propaganda. Our findings suggest that propagandists could use AI to create convincing content with limited effort.

  • Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising
    Michelle Lam, Ayush Pandit, Colin H. Kalicki, Rachit Gupta, Poonam Sahoo, Danaë Metaxa
    Oct 04
    Research

    Algorithm audits are powerful tools for studying black-box systems without direct knowledge of their inner workings. While very effective in examining technical components, the method stops short of a sociotechnical frame, which would also consider users themselves as an integral and dynamic part of the system. Addressing this limitation, we propose the concept of sociotechnical auditing: auditing methods that evaluate algorithmic systems at the sociotechnical level, focusing on the interplay between algorithms and users as each impacts the other. Just as algorithm audits probe an algorithm with varied inputs and observe outputs, a sociotechnical audit (STA) additionally probes users, exposing them to different algorithmic behavior and measuring their resulting attitudes and behaviors. As an example of this method, we develop Intervenr, a platform for conducting browser-based, longitudinal sociotechnical audits with consenting, compensated participants. Intervenr investigates the algorithmic content users encounter online, and also coordinates systematic client-side interventions to understand how users change in response. As a case study, we deploy Intervenr in a two-week sociotechnical audit of online advertising (N = 244) to investigate the central premise that personalized ad targeting is more effective on users. In the first week, we observe and collect all browser ads delivered to users, and in the second, we deploy an ablation-style intervention that disrupts normal targeting by randomly pairing participants and swapping all their ads. We collect user-oriented metrics (self-reported ad interest and feeling of representation) and advertiser-oriented metrics (ad views, clicks, and recognition) throughout, along with a total of over 500,000 ads. Our STA finds that targeted ads indeed perform better with users, but also that users begin to acclimate to different ads in only a week, casting doubt on the primacy of personalized ad targeting given the impact of repeated exposure. In comparison with other evaluation methods that only study technical components, or only experiment on users, sociotechnical audits evaluate sociotechnical systems through the interplay of their technical and human components.
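
    A toy sketch of the swap intervention the abstract describes: participants are randomly paired and each member of a pair is served the ads originally collected for their partner, ablating personalized targeting. Data structures and function names here are hypothetical, not part of Intervenr.

    # Sketch only: random pairing and ad swapping as an ablation of targeting.
    import random

    def pair_participants(participant_ids, seed=0):
        """Randomly pair participants; with an odd count, one is left unpaired."""
        rng = random.Random(seed)
        ids = list(participant_ids)
        rng.shuffle(ids)
        return [(ids[i], ids[i + 1]) for i in range(0, len(ids) - 1, 2)]

    def swapped_ad_assignment(ads_by_participant, pairs):
        """Each paired participant receives the ads collected for their partner."""
        assignment = dict(ads_by_participant)  # unpaired participants keep their own ads
        for a, b in pairs:
            assignment[a], assignment[b] = ads_by_participant[b], ads_by_participant[a]
        return assignment

    ads = {"p1": ["ad_a"], "p2": ["ad_b"], "p3": ["ad_c"], "p4": ["ad_d"]}
    pairs = pair_participants(ads)
    print(pairs, swapped_ad_assignment(ads, pairs))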

  • How Culture Shapes What People Want From AI
    Chunchen Xu, Xiao Ge, Daigo Misaki, Hazel Markus, Jeanne Tsai
    May 11
    Research

    There is an urgent need to incorporate the perspectives of culturally diverse groups into AI developments. We present a novel conceptual framework for research that aims to expand, reimagine, and reground mainstream visions of AI using independent and interdependent cultural models of the self and the environment. Two survey studies support this framework and provide preliminary evidence that people apply their cultural models when imagining their ideal AI. Compared with European American respondents, Chinese respondents viewed it as less important to control AI and more important to connect with AI, and were more likely to prefer AI with capacities to influence. Reflecting both cultural models, findings from African American respondents resembled those from both European American and Chinese respondents. We discuss study limitations and future directions and highlight the need to develop culturally responsive and relevant AI to serve a broader segment of the world population.

  • Minority-group incubators and majority-group reservoirs for promoting the diffusion of climate change and public health adaptations
    Matthew Adam Turner, Alyson L Singleton, Mallory J Harris, Cesar Augusto Lopez, Ian Harryman, Ronan Forde Arthur, Caroline Muraida, James Holland Jones
    Jan 01
    Research

    Current theory suggests that heterogeneous metapopulation structures can help foster the diffusion of innovations to solve pressing issues including climate change adaptation and promoting public health. In this paper, we develop an agent-based model of the spread of adaptations in simulated populations with minority-majority metapopulation structure, where subpopulations have different preferences for social interactions (i.e., homophily) and, consequently, learn preferentially from their own group. In our simulations, minority-majority-structured populations with moderate degrees of in-group preference better spread and maintained an adaptation compared to populations with more equal-sized groups and weak homophily. Minority groups act as incubators for novel adaptations, while majority groups act as reservoirs for the adaptation once it has spread widely. This suggests that population structure with in-group preference could promote the maintenance of novel adaptations.
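
    A toy sketch of the kind of agent-based dynamics described above: a two-group population with tunable in-group preference (homophily), where agents adopt an adaptation by learning from sampled group members. Parameters and the update rule are illustrative assumptions, not the authors' model.

    # Sketch only: adaptation spread in a minority/majority population with
    # homophilous social learning. All parameters are illustrative.
    import random

    def simulate(n_minority=20, n_majority=80, homophily=0.7,
                 adopt_prob=0.5, steps=2000, seed=0):
        rng = random.Random(seed)
        groups = ["min"] * n_minority + ["maj"] * n_majority
        adopted = [False] * len(groups)
        adopted[0] = True  # seed the adaptation in the minority group
        for _ in range(steps):
            learner = rng.randrange(len(groups))
            same_group = rng.random() < homophily  # in-group preference
            candidates = [i for i in range(len(groups))
                          if i != learner and (groups[i] == groups[learner]) == same_group]
            if not candidates:
                continue
            model = rng.choice(candidates)
            if adopted[model] and rng.random() < adopt_prob:
                adopted[learner] = True
        return sum(adopted) / len(adopted)

    for h in (0.5, 0.7, 0.9):
        print(f"homophily={h}: adoption share = {simulate(homophily=h):.2f}")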

  • Interaction of a Buoyant Plume with a Turbulent Canopy Mixing Layer
    Hayoon Chung, Jeffrey R Koseff
    Jun 23
    Research

    This study aims to understand the impact of instabilities and turbulence arising from canopy mixing layers on wind-driven wildfire spread. Using an experimental flume (water) setup with a model vegetation canopy and thermally buoyant plumes, we study the influence of canopy-induced shear and turbulence on the behavior of buoyant plume trajectories. Using the length of the canopy upstream of the plume source to vary the strength of the canopy turbulence, we observed behaviors of the plume trajectory under varying turbulence yet constant cross-flow conditions. Results indicate that increasing canopy turbulence corresponds to increased strength of vertical oscillatory motion and variability in the plume trajectory/position. Furthermore, we find that the canopy coherent structures characterized at the plume source set the intensity and frequency at which the plume oscillates. These perturbations then move longitudinally along the length of the plume at the speed of the free-stream velocity. However, the buoyancy developed by the plume can resist this impact of the canopy structures. Due to these competing effects, the oscillatory behavior of plumes in canopy systems is observed more significantly in systems where the canopy turbulence is dominant. These effects also influence the mixing and entrainment of the plumes. We offer scaling analyses to find flow regimes in which canopy-induced turbulence would be relevant to plume dynamics.

  • Stanford AI Scholars Find Support for Innovation in a Time of Uncertainty
    Nikki Goth Itoi
    Jul 01
    news

    Stanford HAI offers critical resources for faculty and students to continue groundbreaking research across the vast AI landscape.