35 Stanford Research Teams Receive 2023 HAI Seed Grants and HAI-AIMI Partnership Grants

Stanford HAI awards $2.2 million for early-stage researchers to pursue new ideas in AI-related fields 

The Stanford Institute for Human-Centered AI (HAI) will fund 31 new AI projects covering a wide range of topics, from equity/bias and accessibility to health care, sustainability, and robotics, in the 2023/24 academic year. This year’s HAI seed grant recipients come from all seven Stanford schools and represent more than 30 departments across the university. Now in its sixth year, the grant program, supported by Dalio Philanthropies, is succeeding in its mission to bring faculty together across disciplines to share knowledge and explore the frontier of AI technology and applications.

View a full list of the latest seed grant recipients

 

Stanford HAI’s research programs seek to develop AI that is collaborative and augmentative, enhancing human productivity and quality of life. To date, more than $13 million has been committed to early-stage AI research, and more than 360 faculty have been funded across all of the institute’s grant programs. The diversity of connections among disciplines continues to grow.

“We couldn’t be more excited about the potential of this year’s cohort,” says Vanessa Parli, HAI director of research programs. “All of these teams are working on early, innovative, and interdisciplinary AI research with the goal of making sure the technology benefits humanity.”

[Figure: graphs showing the increase in multidisciplinary faculty funding from HAI]

Each year, the seed grant committee encourages researchers from an array of backgrounds to apply, welcoming submissions from humanistic, social scientific, natural scientific, biomedical, and engineering approaches. The institute aims to support new, ambitious, and speculative ideas with the objective of getting initial results. In addition, proposals must support one of HAI’s three research pillars: human impact, augmenting human capabilities, and intelligence. 

Below are a few highlighted projects across key research themes, selected from the 31 teams that earned seed grant funding in this round:

Leveraging AI To Assess the State and Impact of Racial Representation in Television

Over the last decade, major television networks and advertisers have invested significantly in efforts to provide content that reflects the diversity of communities and experiences in the U.S. But we don’t yet know whether this increase in investment translates to real onscreen racial representation or improved racial attitudes among Americans.

Social psychologist Jennifer Eberhardt and colleagues propose to pair natural language processing (NLP) methods with social psychological experiments to measure racial representation in TV scripts and test how these features affect viewers’ racial attitudes. 

The researchers have begun to obtain scripts through a collaboration with Paramount and Black Entertainment Television (BET), with the goal of producing evidence-based principles and AI tools that empower industry stakeholders to create diverse TV content that can mitigate racial prejudice among viewers.
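The article doesn’t detail the NLP pipeline. As a purely illustrative sketch of one way to quantify onscreen representation, the snippet below parses a screenplay-style script and computes each character’s share of dialogue; the screenplay parsing convention, character names, and character-to-group mapping are all assumptions, not the team’s actual method.

```python
import re
from collections import Counter

# Screenplay convention (an assumption): a character cue is an
# all-caps line, followed by that character's dialogue lines.
CUE = re.compile(r"^[A-Z][A-Z .'-]+$")

def dialogue_share(script_text):
    """Compute each character's share of dialogue words."""
    words = Counter()
    speaker = None
    for line in script_text.splitlines():
        stripped = line.strip()
        if not stripped:
            speaker = None          # blank line ends a dialogue block
        elif CUE.match(stripped):
            speaker = stripped      # new character cue
        elif speaker:
            words[speaker] += len(stripped.split())
    total = sum(words.values()) or 1
    return {c: n / total for c, n in words.items()}

# Hypothetical character-to-group mapping supplied by human annotators.
groups = {"MAYA": "Black", "TOM": "white"}

script = """\
MAYA
We open the store at nine, not ten.

TOM
Fine. Nine it is.
"""

shares = dialogue_share(script)
by_group = Counter()
for character, share in shares.items():
    by_group[groups.get(character, "unknown")] += share
print(dict(by_group))  # dialogue share aggregated by group
```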

Using Scientific Machine Learning To Determine Ecosystem Resilience from Time Series Data

Given global changes in climate, ecologists are concerned that the fate of ecosystems is uncertain and that traditional approaches to managing and restoring fragile ecosystems may be inadequate in the future. A new direction of study focuses on ecological resilience: how well an ecosystem handles disturbances, such as warmer temperatures or an increase in toxins, while retaining its ability to function. But so far, resilience has been difficult to measure and quantify.

As co-director of the Stanford Center for Ocean Solutions, marine ecologist and conservation biologist Fiorenza Micheli is interested in leveraging AI to quantify ecological resilience. This project focuses on two aquatic systems that are confronting new environmental challenges: phytoplankton communities and kelp forests. The research team will use existing data collected from ongoing monitoring efforts for both systems to train a scientific machine learning (SciML) model. This approach enhances the capabilities of standard machine learning algorithms by incorporating theoretical knowledge of the systems being studied.
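The announcement doesn’t specify the SciML formulation. One common pattern that matches the description is a universal differential equation, in which a neural network learns a correction term on top of a known mechanistic model; the sketch below uses logistic growth and a synthetic temperature driver purely for illustration, not the team’s data or model.

```python
import torch
import torch.nn as nn

# Known mechanistic part: logistic growth with rate r and capacity K.
r, K = 0.8, 1.0

# Neural correction term: learns dynamics the theory misses,
# e.g., the effect of an environmental driver like temperature.
correction = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))

def step(x, temp, dt=0.1):
    """One Euler step of dx/dt = r*x*(1 - x/K) + NN(x, temp)."""
    mech = r * x * (1 - x / K)
    inp = torch.stack([x, temp], dim=-1)
    return x + dt * (mech + correction(inp).squeeze(-1))

# Toy 'monitoring' time series (a stand-in for kelp/plankton data).
t = torch.arange(50, dtype=torch.float32)
temp = 0.5 + 0.1 * torch.sin(0.2 * t)
obs = 0.9 - 0.3 * temp + 0.02 * torch.randn(50)

opt = torch.optim.Adam(correction.parameters(), lr=1e-2)
for epoch in range(200):
    x = obs[0]
    preds = [x]
    for i in range(49):
        x = step(x, temp[i])
        preds.append(x)
    loss = torch.mean((torch.stack(preds) - obs) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"final fit MSE: {loss.item():.4f}")
```

The hybrid structure is the point: the mechanistic term keeps predictions physically plausible where data are sparse, while the learned term absorbs unmodeled environmental effects.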

As the work progresses, the team expects to find combinations of environmental factors and community characteristics that result in different levels of resilience. Ultimately, they hope to create a platform that other teams can use to train their own SciML models, extending the abilities of ecosystem managers to quantify resilience for other species.

Personal and Private Ambient Intelligence for Senior Care 

Personal health aides (PHAs) are health care professionals who care for older adults in their place of residence. They work long hours for low wages, have limited opportunities for career development, and are sometimes treated poorly by their clients or agencies. An interdisciplinary research team led by Professor Sarah Billington, an expert in human-building interaction, proposes to reimagine care work with the help of new AI tools.

The team is designing a personal and private ambient intelligence system consisting of two main components:

  • Sensor inference algorithms that can combine rich sensing modalities while preserving privacy; 
  • A voice assistant capable of asking medical questions in an intuitive manner, communicating with a patient’s care team, and delivering health and well-being interventions managed by the care team and informed by the captured data.

Initial seed grant funds will be used to extend the team’s field and survey work with older adults to include PHA workers and practices, build sensor inference algorithms for multi-modal sensing, and build and evaluate the project’s voice assistant. The team believes access to these technologies will provide PHAs with new job skills, improved working conditions, and career development opportunities, while at the same time improving efficiency and quality of care for clients.
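The design of the sensor inference algorithms isn’t detailed here. One plausible privacy-preserving pattern consistent with the description is late fusion: raw audio and motion signals are reduced to low-dimensional features on-device, and only those features reach a shared classifier. A minimal sketch, with illustrative feature dimensions and activity labels:

```python
import torch
import torch.nn as nn

class LateFusion(nn.Module):
    """Fuse per-modality embeddings; raw signals stay on-device and
    only low-dimensional derived features reach the fusion layer."""
    def __init__(self, audio_dim=32, motion_dim=16, n_classes=4):
        super().__init__()
        self.audio = nn.Sequential(nn.Linear(audio_dim, 24), nn.ReLU())
        self.motion = nn.Sequential(nn.Linear(motion_dim, 24), nn.ReLU())
        self.head = nn.Linear(48, n_classes)  # e.g., rest/walk/eat/distress

    def forward(self, audio_feats, motion_feats):
        z = torch.cat([self.audio(audio_feats),
                       self.motion(motion_feats)], dim=-1)
        return self.head(z)

model = LateFusion()
# Stand-ins for on-device feature extraction (e.g., audio spectral
# summaries and accelerometer statistics), never raw recordings.
audio_feats = torch.randn(8, 32)
motion_feats = torch.randn(8, 16)
logits = model(audio_feats, motion_feats)
print(logits.shape)  # torch.Size([8, 4])
```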

Generating Fair Synthetic Patient Data for Large-Scale Biobank Data

Machine learning models have the potential to reveal clinical insights and improve patient outcomes; however, success depends on developing large-scale patient datasets. Gathering that data is time-intensive and raises privacy concerns. One alternative path: creating synthetic data from generative models that leverage both biobank data collected from scientific studies and electronic health record (EHR) data from individual patients.

Professor Russ Altman and a team of Stanford bioengineers and computer scientists propose to develop a novel generative model that creates synthetic patient data and addresses two problems with current approaches: 

  1. Existing models focus only on EHR data, which is more limited in scope than biobank data that includes thousands of different features. 
  2. Patient datasets are often biased and imbalanced, whereas synthetic data could be optimized for minority representation.

The team plans to conduct the research on UK Biobank and All of Us, two large-scale biobanks that have separately collected health information, such as genomics, medical history, and physical measurements, from around a half-million participants each.
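The generative model itself isn’t described in the announcement, but the balancing idea in the second point above can be sketched simply: once a conditional generator is trained, synthetic cohorts are sampled with group labels drawn from a target distribution rather than the skewed empirical one. The generator below is an untrained stand-in, and all dimensions and group counts are illustrative.

```python
import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    """Stand-in for a trained conditional generator: maps noise plus
    a group label to a synthetic feature vector (labs, measurements)."""
    def __init__(self, noise_dim=16, n_groups=3, feat_dim=8):
        super().__init__()
        self.embed = nn.Embedding(n_groups, 8)
        self.net = nn.Sequential(nn.Linear(noise_dim + 8, 32), nn.ReLU(),
                                 nn.Linear(32, feat_dim))

    def forward(self, z, group):
        return self.net(torch.cat([z, self.embed(group)], dim=-1))

gen = CondGenerator()

# Empirical group frequencies in the real cohort (skewed)...
empirical = torch.tensor([0.80, 0.15, 0.05])
# ...versus the target mix used when sampling the generator (balanced).
target = torch.tensor([1 / 3, 1 / 3, 1 / 3])

n = 1000
groups = torch.multinomial(target, n, replacement=True)
z = torch.randn(n, 16)
synthetic = gen(z, groups)
print(synthetic.shape, torch.bincount(groups, minlength=3) / n)
```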

Generative Models for Enhancing Accessible Data Exploration

A team of experts in human-computer interaction, haptic and audio perception, and computational cognitive modeling wants to give blind and low-vision (BLV) students better access to the data visualizations that have become an integral part of learning and intuition-building. Led by Sean Follmer, a professor of mechanical engineering and director of the Stanford SHAPE Lab, the team recognizes that many BLV students lag behind in the skills needed to interpret graphical information and that representation of BLV people in the workforce is disproportionately low.

Their project starts with the development of a computational model of how BLV experts approach data exploration based on perceptual cues. The interaction data the researchers collect from observing experts as they complete visualization tasks will be used to train a generative AI model capable of uncovering cognitive and perceptual insights.

In a second phase of the project, the researchers plan to investigate ways to integrate the model into an intelligent tutoring agent that adapts to the actions of novice learners and predicts actions that guide students toward more effective exploration strategies.

Coordinating Collaborative, Multi-Agent Manipulation through Large Language Models

What if we could expand the collaborative abilities of multi-robot or human-robot teams to handle a diverse set of tasks in real-world scenarios? A project led by robotics expert and assistant professor of computer science Jeannette Bohg, director of the Stanford Interactive Perception and Robot Learning Lab, aims to develop advanced algorithms that can complete an array of tasks, from assembling furniture to changing bed sheets to setting up a tent.

“By leveraging collaborative manipulation techniques, a team of agents can work together to perform these tasks with ease, offering several advantages over individual agents,” Bohg said. “A team of agents can achieve faster task completion times; improved robustness through data fusion, information sharing, and redundancy; and greater reliability, flexibility, scalability, and versatility.” 
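The project’s algorithms aren’t described beyond this, but a common pattern in LLM-based multi-robot work is to prompt a language model to decompose a task into per-robot subtask plans. In the sketch below, `call_llm` is a placeholder that returns a canned response so the example runs offline; a real system would call an actual chat-model API and validate the output before execution.

```python
import json

def call_llm(prompt: str) -> str:
    """Placeholder for a real chat-model API call; returns a canned
    plan here so the sketch runs without network access."""
    return json.dumps({
        "robot_1": ["grasp tent pole A", "insert pole A into sleeve"],
        "robot_2": ["hold tent fabric taut", "stake corner 1"],
    })

PROMPT = """You coordinate {n} robot arms. Decompose the task below into
a JSON object mapping each robot to an ordered list of subtasks, keeping
the robots' actions collision-free and parallel where possible.
Task: {task}"""

def plan(task: str, n_robots: int = 2) -> dict:
    raw = call_llm(PROMPT.format(n=n_robots, task=task))
    return json.loads(raw)  # a real system would validate/repair this

for robot, steps in plan("set up a two-person tent").items():
    print(robot, "->", steps)
```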

Bohg will explore multi-robot coordination with Professor Shuran Song, a faculty member in the electrical engineering department who directs the Robotics and Embodied Artificial Intelligence Lab (REAL) at Stanford. 

“This funding fosters interaction across campus, which often leads to novel insights and viewpoints on a problem,” Bohg added.

AI Research in Medicine and Health

Four additional research projects received a combined $800,000 in grants through a partnership between Stanford HAI and the Center for Artificial Intelligence in Medicine and Imaging (AIMI). These teams are researching innovative ways to transform how AI shapes health care. 

“We are so gratified by this rich collaboration between HAI and AIMI,” said Curt Langlotz, director of the AIMI Center. “These grants support so many innovative teams as they develop new methods to improve health.”

To earn the grant award, each team had to explain how it would use a real clinical dataset and work toward a near-term clinical application with a well-defined and testable impact.

For example, a project titled “Bridging the Modality Gap: Diffusion Implicit Bridges for Inter-Modality Medical Image Translation” proposes to advance the application of machine learning and computer vision to medical imaging analysis. Associate professor of radiology and project lead Sergios Gatidis explains that a key limitation of current clinical ML tools is that they operate on a single modality, such as MRI, CT, or PET. The team plans to use its funding to develop algorithms capable of bridging the gap between medical imaging modalities. The researchers hope to provide a path toward general-purpose, multi-modality ML models for this field.
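In the diffusion implicit bridges formulation the project title refers to, an image is deterministically encoded into a shared latent by running one modality’s diffusion ODE forward (DDIM inversion), then decoded by running a second modality’s ODE in reverse. The toy sketch below uses untrained stand-in denoisers and a flattened vector in place of a real image, so it illustrates only the control flow, not the team’s actual models.

```python
import torch
import torch.nn as nn

T = 50
alphas = torch.linspace(0.999, 0.90, T).cumprod(0)  # toy noise schedule

# Untrained stand-ins for modality-specific noise predictors
# (e.g., one trained on MRI images, one on CT images).
eps_mri = nn.Sequential(nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 64))
eps_ct = nn.Sequential(nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 64))

def ddim_step(x, model, a_from, a_to):
    """Deterministic DDIM update between noise levels a_from -> a_to."""
    eps = model(x)
    x0 = (x - (1 - a_from).sqrt() * eps) / a_from.sqrt()
    return a_to.sqrt() * x0 + (1 - a_to).sqrt() * eps

@torch.no_grad()
def translate(x_src):
    # Encode: run the source model's ODE toward noise (DDIM inversion).
    x = x_src
    for t in range(T - 1):
        x = ddim_step(x, eps_mri, alphas[t], alphas[t + 1])
    # Decode: run the target model's ODE back from the shared latent.
    for t in reversed(range(T - 1)):
        x = ddim_step(x, eps_ct, alphas[t + 1], alphas[t])
    return x

mri_slice = torch.randn(1, 64)  # stand-in for a flattened image patch
ct_like = translate(mri_slice)
print(ct_like.shape)  # torch.Size([1, 64])
```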

“This project is a crucial component of our effort toward highly capable machine learning models for medical image analysis,” Gatidis said. “Our work represents a significant step in the development of clinical AI tools that can be deployed across a wide range of tasks and made accessible to a diverse patient population.” 

The full list of 2023 HAI Seed Grant recipients is available here, and projects funded through this year’s HAI-AIMI partnership are listed here.

Stanford HAI's seed program has been graciously supported by Dalio Philanthropies for the past four years. 

Learn more about Stanford HAI grant programs.