AIMI-HAI Partnership Grant

The AIMI-HAI Partnership Grant funds new and ambitious ideas that reimagine artificial intelligence in healthcare, using real clinical datasets, with near-term clinical applications. Visit the Call for Proposals page for criteria and eligibility. If you have any questions, please contact us at hai-grants@lists.stanford.edu.

 

2023 - 2025 Grant Projects:

  • Medical imaging encompasses various modalities, such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT), which generate imaging data with distinct content and image characteristics. Medical experts consider results from several imaging studies to establish a diagnosis or treatment plan. Modern machine learning algorithms for medical image analysis perform well on tasks that are limited to a single imaging modality or contrast. However, these algorithms face limitations when processing imaging data that includes different modalities.

    In this project, we aim to address this limitation by developing machine learning algorithms that can translate between different medical imaging modalities. We will base our work on diffusion models, a class of machine learning models successfully used for the generation and analysis of image data in various domains. With this project, we expect to open new possibilities in machine learning-based processing and analysis of medical imaging data and to build algorithms that are accessible to a broader range of clinical situations and a larger number of patient groups. A schematic sketch of the translation setup follows the team list below.

    Name | Role | School | Department
    Sergios Gatidis | Main PI | School of Medicine | Radiology
    Stefano Ermon | Co-PI | School of Engineering | Computer Science
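
    As a rough illustration of the kind of cross-modality translation described above, the sketch below conditions a single denoising diffusion training step on a source-modality image (e.g., MRI) to predict the noise added to the target modality (e.g., CT). The network interface, noise schedule, and shapes are illustrative assumptions, not the project's implementation.

        # Minimal sketch (not the project's code): conditional denoising diffusion
        # for MRI -> CT translation. `model` is a hypothetical stand-in for any
        # noise-prediction network that accepts a conditioning image and timestep.
        import torch
        import torch.nn.functional as F

        T = 1000                                        # number of diffusion timesteps
        betas = torch.linspace(1e-4, 0.02, T)           # linear noise schedule
        alphas_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative signal fraction

        def diffusion_loss(model, mri, ct):
            """One training step: learn to denoise CT images conditioned on MRI."""
            b = ct.shape[0]
            t = torch.randint(0, T, (b,))                      # random timestep per sample
            a = alphas_bar[t].view(b, 1, 1, 1)
            noise = torch.randn_like(ct)
            noisy_ct = a.sqrt() * ct + (1 - a).sqrt() * noise  # forward-process sample
            # The network sees the noisy target plus the source modality as conditioning.
            pred_noise = model(torch.cat([noisy_ct, mri], dim=1), t)
            return F.mse_loss(pred_noise, noise)

    At sampling time, the same network would iteratively denoise pure noise while conditioned on the MRI, yielding a synthetic image in the target modality.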

  • We seek to develop a "CBT-AI Companion," an LLM-based application designed to enhance mental health treatment. Addressing the prevalent but undertreated issues of depression and anxiety, the project aims to increase the effectiveness of psychotherapy, particularly cognitive behavioral therapy (CBT). Traditional methods often see low compliance in practicing therapy skills, which are crucial for treatment success. The proposed application leverages large language models (LLMs) to support patients in practicing cognitive and behavioral skills, offering immediate feedback and personalized experiences based on the patient’s context and stressors. This approach is expected to improve clinical outcomes due to a stronger engagement in skill practice.

    Recognizing the complexity and risks associated with AI in psychotherapy, the project proposes a cautious, staged integration of AI, with clinicians overseeing both the tasks the AI generates and the patients' responses. The application will focus on skills targeting depression and anxiety, such as cognitive reappraisal, activity planning, and exposure. The development process involves leveraging large clinical datasets to tailor the AI, followed by phases of testing for safety, feasibility, usability, and clinical utility, before careful deployment with patients in a third phase. The project aims to provide a model for AI-powered mental health applications and to inform the careful integration of LLM-supported psychotherapy into routine care. A sketch of the clinician-in-the-loop pattern follows the team list below.

    Name | Role | School | Department
    Johannes Eichstaedt | Main PI | School of Humanities and Sciences | Psychology
    Shannon Wiltsey Stirman | Co-PI | School of Medicine | Psychiatry and Behavioral Sciences
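
    To make the staged, clinician-in-the-loop design concrete, here is a minimal sketch in which an LLM-drafted practice task is held in a review queue until a clinician approves it. The `generate_text` helper, the queue, and the prompt wording are hypothetical; the project's actual pipeline is not described in this summary.

        # Minimal sketch (hypothetical, not the project's code): an LLM drafts a
        # cognitive-reappraisal practice task, which is queued for clinician
        # approval before the patient ever sees it.
        from dataclasses import dataclass, field

        @dataclass
        class ReviewQueue:
            pending: list = field(default_factory=list)
            approved: list = field(default_factory=list)

            def submit(self, task: str) -> None:
                self.pending.append(task)           # held until a clinician reviews it

            def approve(self, task: str) -> None:
                self.pending.remove(task)
                self.approved.append(task)          # only approved tasks reach patients

        def draft_reappraisal_task(generate_text, stressor: str) -> str:
            """`generate_text` is any LLM completion function (assumed interface)."""
            prompt = (
                "Write a short CBT cognitive-reappraisal exercise for a patient whose "
                f"current stressor is: {stressor}. Ask the patient to name the "
                "automatic thought and one alternative interpretation."
            )
            return generate_text(prompt)

        # Usage: queue.submit(draft_reappraisal_task(llm, "conflict at work")), then a
        # clinician calls queue.approve(...) before the task is shown to the patient.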

  • Surgical interventions are a major form of treatment in modern healthcare, with open surgical procedures being the dominant form of surgery worldwide. Surgeon skill is a key factor affecting patient outcomes, yet current methods for assessing skill are primarily qualitative and difficult to scale. Our project develops AI for automated skill assessment in open surgical procedures. Whereas most prior work has focused on AI for laparoscopic procedures, open procedures present greater challenges due to the larger and more complex field of view. We will develop methods for providing complementary forms of feedback from surgical video, including kinematics analysis and action quality assessment through video question answering, and will evaluate the utility of our AI methods through pilot studies with surgical trainees. Our project aims to demonstrate the feasibility of AI in contributing quantitative, scalable skill assessment and feedback to surgical education. A sketch of one kinematic metric follows the team list below.

    Name | Role | School | Department
    Serena Yeung | Main PI | School of Medicine | Biomedical Data Science
    Gabriel Brat | Co-PI | External Collaborator, Harvard | Beth Israel Deaconess Medical Center
    Teodor Grantcharov | Co-PI | School of Medicine | General Surgery
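
    As an illustration of the kinematics-based feedback named above, the sketch below computes two standard motion metrics, total path length and mean speed, from a sequence of tracked instrument or hand positions. Obtaining those positions from surgical video (e.g., with a pose-estimation model) is assumed to happen upstream and is not shown.

        # Minimal sketch: kinematic skill metrics from tracked 2D positions.
        # `positions` is an (N, 2) array of per-frame hand/instrument coordinates,
        # assumed to come from an upstream pose-estimation model.
        import numpy as np

        def kinematic_metrics(positions: np.ndarray, fps: float) -> dict:
            steps = np.diff(positions, axis=0)     # per-frame displacement vectors
            dists = np.linalg.norm(steps, axis=1)  # per-frame distance traveled
            path_length = dists.sum()              # total distance over the clip
            mean_speed = dists.mean() * fps        # average speed (pixels/second)
            # Shorter, smoother paths are commonly associated with higher
            # surgical skill in the motion-analysis literature.
            return {"path_length": path_length, "mean_speed": mean_speed}

        # Example: kinematic_metrics(np.random.rand(300, 2), fps=30.0)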

  • This project focuses on Pupper, an AI-enabled robotic dog developed at Stanford, aimed at improving the hospital experience for pediatric patients facing social isolation, depression, and/or anxiety. Unlike traditional quadrupeds, Pupper is approachable, cost-effective, and safe, making it well suited for child interaction. It offers an engaging alternative to conventional sedation methods, potentially reducing healthcare costs and medication risks. With its computer vision and agility capabilities, Pupper has also shown promise as a physical therapy motivator and source of emotional support. This research will progress along two parallel paths: technical enhancement of Pupper, including AI advancements in computer vision, autonomous gait, and speech processing, and clinical studies assessing Pupper's impact in pediatric care. These studies will focus on mitigating social isolation, reducing anxiety and/or depression, and facilitating physical therapy participation among hospitalized children.

    Name | Role | School | Department
    Karen Liu | Main PI | School of Engineering | Computer Science
    Teresa Nguyen | Co-PI | School of Medicine | Anesthesiology, Perioperative and Pain Medicine

 

2021 - 2023 Grant Projects:

  • This project will advance the development of artificial intelligence (AI) to identify patients at risk for ST-segment elevation myocardial infarction (STEMI). Patients are screened upon arrival in an emergency department to identify those who may have this most severe form of heart attack, and a STEMI diagnosis must be made within 10 minutes. The project team will improve current practice by integrating AI designed to replicate physician decision making into STEMI screening. The project brings together three areas of expertise: emergency cardiovascular care, clinical informatics, and predictive modeling analytics. The first phase of work will quantify the value of socio-demographic diversity characteristics in augmenting the sensitivity and precision of risk prediction. In addition, the team will silently pilot the screening model as a physician AI within the Stanford Adult Hospital's electronic health record (EHR) system, running on live clinical care data for six months. The team will then measure the timeliness of the physician AI's decision making and the effectiveness of its risk prediction in comparison to actual clinical care screening (a sketch of these comparison metrics follows the team list below). This work explores the feasibility of a mechanistic approach to translating physician AI into the clinical environment in order to improve timely diagnosis of a time-sensitive medical condition.

    Name | Role | School | Department
    Maame Yaa Yiadom | PI | School of Medicine | Emergency Medicine
    Ian Brown | Co-PI | School of Medicine | Emergency Medicine
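
    To illustrate the planned comparison against actual clinical screening, here is a minimal sketch of the sensitivity and precision computation over the silent pilot's flags (variable names are hypothetical):

        # Minimal sketch: sensitivity and precision of silent-pilot STEMI flags
        # against reference labels from actual clinical care screening.
        def screening_metrics(ai_flags: list[bool], true_stemi: list[bool]) -> dict:
            tp = sum(a and t for a, t in zip(ai_flags, true_stemi))      # true positives
            fp = sum(a and not t for a, t in zip(ai_flags, true_stemi))  # false alarms
            fn = sum(t and not a for a, t in zip(ai_flags, true_stemi))  # missed STEMIs
            sensitivity = tp / (tp + fn) if tp + fn else 0.0  # share of STEMIs caught
            precision = tp / (tp + fp) if tp + fp else 0.0    # share of flags correct
            return {"sensitivity": sensitivity, "precision": precision}

        # Example: screening_metrics([True, True, False], [True, False, False])
        # -> {'sensitivity': 1.0, 'precision': 0.5}
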
  • Comprehensive genomic profiling of tumor specimens is an important new instrument in the diagnosis and treatment of cancer. With an increasing number of available molecular testing options, it can be difficult to choose the most relevant tests from the available test menu. Machine learning tools promise to be a new and important source of information for oncologists to make the best choice for their patients. Using the Heme-STAMP tumor profiling test as an example application, we are developing a prediction tool that uses patient-specific data available in the electronic health record to predict how likely the molecular test is to yield new and actionable results (a minimal sketch of this kind of prediction follows the team list below). This information will be presented in real time to the ordering provider so that it can be used to select the most relevant test for the patient.

    Name | Role | School | Department
    Henning Stehr | PI | School of Medicine | Pathology
    Dita Gratzinger | Co-PI | School of Medicine | Pathology
    Jonathan Chen | Co-PI | School of Medicine | Biomedical Informatics
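
    A minimal sketch of the kind of prediction described above: estimating, from EHR-derived features, the probability that a molecular test yields actionable results. The toy features and logistic-regression model are illustrative assumptions, not the project's pipeline.

        # Minimal sketch: predict the probability that a molecular test yields
        # actionable results from EHR-derived features (feature set is hypothetical).
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Rows: patients; columns: e.g., age, prior-diagnosis flag, a lab value.
        X_train = np.array([[63, 1, 2.1], [47, 0, 0.9], [71, 1, 3.4], [55, 0, 1.2]])
        y_train = np.array([1, 0, 1, 0])  # 1 = test returned actionable findings

        model = LogisticRegression().fit(X_train, y_train)
        p_actionable = model.predict_proba([[60, 1, 1.8]])[0, 1]
        print(f"Predicted probability of actionable result: {p_actionable:.2f}")
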
  • Early detection of chronic disorders can improve population-level quality of life, longevity, and healthcare costs. While multiple screening tests for chronic disorders exist, these can have low compliance, add to healthcare costs, and be insensitive to early-stage disease, when interventions may be most effective. To address this, we will implement a solution for diagnosing ischemic heart disease, diabetes mellitus, and osteoporosis using abdominal computed tomography (CT) scans that have already been acquired for other reasons. Such CT scans provide salient biomarkers, such as the distribution of fat and muscle within the body, vascular calcifications, and bone mineral density measures, all of which are indicators of future disease activity (a sketch of one such opportunistic biomarker follows the team list below). We will combine these images with a patient's medical record and build explainable models for communicating model risks to both clinicians and patients. This high-value paradigm of opportunistic analysis of already-acquired imaging has the potential to improve patient outcomes without requiring additional testing or adding to the already burgeoning costs of healthcare.

    Name | Role | School | Department
    Akshay Chaudhari | PI | School of Medicine | Radiology
    Marc Willis | Co-PI | School of Medicine | Radiology
    Daniel Rubin | Co-PI | School of Medicine | Biomedical Data Science and Radiology
    David Maron | Co-PI | School of Medicine | Cardiovascular Medicine
    Curtis Langlotz | Co-PI | School of Medicine | Biomedical Data Science and Radiology
    Robert Boutin | Co-PI | School of Medicine | Radiology
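
    As one concrete example of an opportunistic biomarker, the sketch below estimates a bone-density proxy by averaging Hounsfield units (HU) inside a vertebral region of interest on an existing CT slice. The ROI coordinates, the synthetic data, and the risk threshold in the final comment are illustrative assumptions.

        # Minimal sketch: opportunistic bone-density proxy from an existing CT slice.
        # `ct_slice` holds Hounsfield units (HU); the vertebral ROI is assumed to
        # come from an upstream segmentation step and is hard-coded here.
        import numpy as np

        def trabecular_hu(ct_slice: np.ndarray, roi: tuple) -> float:
            """Mean HU inside the ROI; lower values suggest lower bone density."""
            return float(ct_slice[roi].mean())

        ct_slice = np.random.normal(150, 30, size=(512, 512))  # synthetic HU values
        roi = (slice(240, 270), slice(250, 280))               # hypothetical vertebral ROI
        print(f"Mean trabecular HU: {trabecular_hu(ct_slice, roi):.0f}")
        # Values well below ~100 HU at L1 have been associated with osteoporosis risk.
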
  • A substantial literature demonstrates the significance of the human-made environment for key health behaviors and outcomes. However, most studies have been based on large-scale geographic information system (GIS) measures, which typically do not represent the local context in which individuals regularly interact with their environments. Evidence has emerged that the streetscape can affect health outcomes and disparities. Traditional streetscape audits require researchers to walk through an environment and manually classify features; this approach is time-consuming and relies on accurate and reliable human judgment. The emergence of widespread map services that feature panoramas of the environment (e.g., Google Street View) offers an unprecedented opportunity for measuring streetscape features from the perspective at which individuals interact with their environment. Coupled with deep learning methods to extract features, this approach can overcome the limitations of the traditional streetscape audit (a sketch of one such extracted feature follows the team list below). The overarching hypothesis of this work is that the presence of positive streetscape features can help enhance health. Such features, including lighting, safe pedestrian paths, and greenspace, may be especially important in under-resourced communities with high levels of health disparities. The proposed research will be conducted in collaboration with a population-derived cohort of African Americans living in the Deep South. Employing innovative human-centered artificial intelligence and computer vision methods, we will evaluate whether patterns of streetscape features are associated with physical activity, well-being, and chronic disease, independent of traditional risk factors and GIS-based measures.

    Name | Role | School | Department
    Michelle Odden | PI | School of Medicine | Epidemiology and Population Health
    Abby King | Co-PI | School of Medicine | Epidemiology
    Sherri Rose | Co-PI | School of Medicine | Primary Care and Outcomes Research
    Jiajun Wu | Co-PI | School of Engineering | Computer Science
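
    To illustrate the feature-extraction step, the sketch below computes the fraction of a street-view panorama that a semantic-segmentation model labels as vegetation. The `segment` function and its class id are assumed stand-ins for whatever model and label map the team adopts.

        # Minimal sketch: greenspace fraction of a street-view panorama.
        # `segment` is a hypothetical stand-in for any semantic-segmentation model
        # returning a per-pixel class map; VEGETATION is its assumed class id.
        import numpy as np

        VEGETATION = 8  # hypothetical class id for trees/grass in the label map

        def greenspace_fraction(image: np.ndarray, segment) -> float:
            labels = segment(image)                      # (H, W) per-pixel class ids
            return float((labels == VEGETATION).mean())  # share of vegetation pixels

        # Per-neighborhood averages of such fractions could then be tested for
        # association with physical activity and chronic-disease outcomes.
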
  • Imaging tests are essential for diagnosing cancers in children and for monitoring tumor response to therapy. New technologies enable simultaneous acquisition of positron emission tomography (PET) and magnetic resonance imaging (MRI) images, which allows for "one stop" cancer staging. However, the interpretation of the 30,000 to 50,000 images generated with PET/MRI technology is time-consuming and prone to variability from one observer to another. In children with lymphoma, tumor response to chemotherapy is typically expressed by a 5-point score (the "Deauville score") that describes the tumor signal on PET scans as being higher or lower than the signal of major blood vessels and the liver (a rule-based sketch of this scoring follows the team list below). Human observers tend to show limited reproducibility for the intermediate scores of 2, 3, and 4.

    We propose to solve this problem by developing deep convolutional neural networks (Deep-CNNs) that can accelerate and standardize pediatric PET/MR image interpretation. The goal of our project is to develop a Deep-CNN for standardized Deauville scoring of lymphomas in children. We hypothesize that a Deep-CNN can significantly (>50%) speed up PET image interpretation times and improve the reproducibility of Deauville score assessments. To the best of our knowledge, this is the first attempt to apply Deep-CNNs to the interpretation of pediatric cancer imaging studies. Results will be readily translatable to the clinic and will thereby have major and broad healthcare impact.

    Despite the obvious need for accelerated diagnosis in children with cancer, no current strategy has employed Deep-CNNs to speed up and reduce variability in image interpretation for this population, because Deep-CNNs need to be trained on large datasets to achieve satisfactory performance. Since pediatric cancers are rarer than adult cancers and PET/MRI technologies are relatively new, limited pediatric PET/MRI data are available to date. We are in a unique position to address this problem because we have established a centralized image registry with PET/MRI data of pediatric cancer patients from five major children's hospitals. This will enable us to train and validate a Deep-CNN for therapy response assessment in pediatric cancers. Once established, our Deep-CNN can be made available to other institutions and cross-trained for other tumor types and adult patients.

    Name | Role | School | Department
    Heike E. Daldrup-Link | PI | School of Medicine | Radiology
    Daniel Rubin | Co-PI | School of Medicine | Biomedical Data Science and Radiology
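
    The Deauville criteria map a lesion's PET uptake, relative to the mediastinal blood pool and the liver, onto the 5-point score the project aims to standardize. Below is a minimal rule-based sketch using SUVmax values; the 3x liver multiplier for "markedly increased" uptake is a common quantitative convention but an assumption here, since clinical scoring is visual and expert-driven.

        # Minimal sketch: rule-based Deauville score from SUVmax measurements.
        def deauville_score(lesion: float, mediastinum: float, liver: float) -> int:
            if lesion <= 0:
                return 1      # no residual uptake
            if lesion <= mediastinum:
                return 2      # uptake <= mediastinal blood pool
            if lesion <= liver:
                return 3      # uptake > mediastinum but <= liver
            if lesion <= 3 * liver:
                return 4      # moderately higher than liver
            return 5          # markedly higher than liver (or new lesions)

        print(deauville_score(lesion=4.2, mediastinum=1.8, liver=2.5))  # -> 4
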
  • Analyzing electronic health records (EHRs) with machine learning holds great promise for tackling key problems in healthcare. However, the scale, complexity, and heterogeneity of EHR data create challenges for integrating these data into machine learning models. Current data science tools for EHRs largely focus on count-based models using structured data (e.g., medical codes, labs, demographics) and fail to capture critical information found in text and images. Moreover, cohort sizes are typically small, failing to capture generalizable signals found across larger-scale patient populations. The inability to easily create feature representations that contextualize patient state and capture the full richness of EHR data directly impacts almost all clinical data science applications. Building on our prior work training foundation models using structured data from the entire Stanford Medicine patient population, we will develop a multimodal patient representation learning framework that combines structured EHR codes, clinical notes, and images. We will evaluate classifiers trained with our embeddings on three cohorts: pediatric sepsis patients presenting within 3 days of admission; patients diagnosed with pulmonary embolism; and CheXpert (a sketch of this embedding-based evaluation follows the team list below). This foundation model will be integrated into our patient search engine and cohort analysis tool, the Advanced Cohort Engine (ACE). We hypothesize that patient embeddings generated with multimodal data will improve classification performance across a range of clinical tasks, drive new insights via latent subclass analyses, and enable new modes of error analysis for clinical researchers. All of our code will be released as open-source software and will include guidance on using GCP infrastructure to train custom cohorts, along with estimates of training time, cloud costs, and carbon footprint.

    Name | Role | School | Department
    Keith Morse | PI | School of Medicine | Pediatrics
    Jason Fries | Co-PI | School of Medicine | Med/BMIR
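
    A minimal sketch of the embedding-based evaluation described above: training a lightweight classifier head on frozen patient embeddings. The embedding dimension, the random stand-in embeddings, and the task labels are illustrative assumptions.

        # Minimal sketch: a classifier head over frozen multimodal patient embeddings.
        # Embeddings here are random stand-ins for a foundation model's output.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        emb = rng.normal(size=(200, 768))      # 200 patients x 768-dim embeddings
        labels = rng.integers(0, 2, size=200)  # e.g., pulmonary-embolism diagnosis

        clf = LogisticRegression(max_iter=1000).fit(emb[:150], labels[:150])
        auc = roc_auc_score(labels[150:], clf.predict_proba(emb[150:])[:, 1])
        print(f"Held-out AUROC: {auc:.2f}")    # ~0.5 expected with random embeddings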