© Stanford University.  Stanford, California 94305.
HAI and AIMI Partnership Grant | Stanford HAI


HAI and AIMI Partnership Grant

Status: Closed
Topics: Design, Human-Computer Interaction
Overview
2023 - 2025 Grant Recipients
2021 - 2023 Grant Recipients

Medical imaging encompasses various modalities, such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT), which generate imaging data with distinct content and image characteristics. Medical experts consider results from several imaging studies to establish a diagnosis or treatment plan. Modern machine learning algorithms for medical image analysis perform well on tasks that are limited to a single imaging modality or contrast. However, these algorithms face limitations when processing imaging data that includes different modalities.

In this project, we aim to address this limitation by developing machine learning algorithms that can translate between different medical imaging modalities. We will base our work on diffusion models, a class of machine learning models that has been used successfully to generate and analyze image data across many domains. With this project, we expect to open new possibilities for machine learning-based processing and analysis of medical imaging data and to build algorithms that are accessible to a broader range of clinical situations and patient groups.
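As a rough illustration of how a diffusion model can be conditioned on a source modality, the sketch below shows the forward (noising) process applied to a target image (e.g., CT) and a denoiser input formed by channel-concatenating the source image (e.g., MRI). The schedule, shapes, and conditioning-by-concatenation are illustrative assumptions, not the project's actual design.

```python
import numpy as np

# Toy sketch of a conditional diffusion setup for modality translation.
# All values and shapes are illustrative; real models operate on learned
# denoising networks, not the random arrays used here.

T = 1000                                  # number of diffusion steps (assumed)
betas = np.linspace(1e-4, 0.02, T)        # linear noise schedule (assumed)
alphas_cumprod = np.cumprod(1.0 - betas)  # cumulative product of (1 - beta_t)

def q_sample(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0): the target image noised to step t."""
    a_bar = alphas_cumprod[t]
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(a_bar) * x0 + np.sqrt(1.0 - a_bar) * noise, noise

def denoiser_input(x_t, source_img):
    """Condition the denoiser by channel-concatenating the source modality."""
    return np.concatenate([x_t, source_img], axis=0)  # (2, H, W)

rng = np.random.default_rng(0)
mri = rng.standard_normal((1, 64, 64))  # hypothetical source image (MRI)
ct = rng.standard_normal((1, 64, 64))   # hypothetical target image (CT)

x_t, eps = q_sample(ct, t=500, rng=rng)
net_in = denoiser_input(x_t, mri)
print(net_in.shape)  # (2, 64, 64)
```

A denoising network trained on such paired inputs learns to predict the added noise given the source image, so that sampling from pure noise, conditioned on an MRI, can yield a synthetic CT.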

Name             Role     School                 Department
Sergios Gatidis  Main PI  School of Medicine     Radiology
Stefano Ermon    Co-PI    School of Engineering  Computer Science

We seek to develop a "CBT-AI Companion," an LLM-based application designed to enhance mental health treatment. Addressing the prevalent but undertreated issues of depression and anxiety, the project aims to increase the effectiveness of psychotherapy, particularly cognitive behavioral therapy (CBT). Traditional methods often see low compliance in practicing therapy skills, which are crucial for treatment success. The proposed application leverages large language models (LLMs) to support patients in practicing cognitive and behavioral skills, offering immediate feedback and personalized experiences based on the patient's context and stressors. This approach is expected to improve clinical outcomes through stronger engagement in skill practice.

Recognizing the complexity and risks associated with AI in psychotherapy, the project proposes a cautious, staged integration of AI, with clinician oversight on tasks generated by the AI and the patients' responses. The application will focus on skills targeting depression and anxiety, such as cognitive reappraisal, activity planning, and exposure. The development process involves leveraging large clinical datasets to tailor the AI, followed by phases of testing for safety, feasibility, usability, and clinical utility, before careful deployment with patients in a third phase. The project aims to provide a model for AI-powered mental health applications and inform the careful integration of LLM-supported psychotherapy into routine care.

Name                     Role     School                             Department
Johannes Eichstaedt      Main PI  School of Humanities and Sciences  Psychology
Shannon Wiltsey Stirman  Co-PI    School of Medicine                 Psychiatry and Behavioral Sciences

Surgical interventions are a major form of treatment in modern healthcare, with open surgical procedures the dominant form of surgery worldwide. Surgeon skill is a key factor in patient outcomes, yet current methods for assessing skill are primarily qualitative and difficult to scale. Our project develops AI as an engine for automated skill assessment in open surgical procedures. Whereas most prior work has focused on AI for laparoscopic procedures, open procedures present greater challenges due to their larger and more complex field of view. We will develop methods for providing complementary forms of feedback from surgical video, including kinematics analysis and action quality assessment through video question answering. Finally, we will evaluate the utility of these AI methods through pilot studies with surgical trainees. Our project aims to demonstrate the feasibility of AI for quantitative, scalable skill assessment and feedback in surgical education.
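To illustrate the kind of quantitative kinematics feature that can be derived from surgical video, the sketch below computes motion smoothness of a tracked instrument tip as mean squared jerk (the third derivative of position). The trajectories are synthetic and the metric is a generic example from the motor-skill literature, not necessarily the project's chosen measure.

```python
import numpy as np

# Illustrative kinematics feature: mean squared jerk of an instrument-tip
# trajectory. Smoother (more expert-like) motion yields lower jerk.

def mean_squared_jerk(traj, fps=30.0):
    """traj: (N, 2) array of (x, y) tip positions, one row per video frame."""
    dt = 1.0 / fps
    vel = np.diff(traj, axis=0) / dt    # velocity between frames
    acc = np.diff(vel, axis=0) / dt     # acceleration
    jerk = np.diff(acc, axis=0) / dt    # jerk (third derivative)
    return float(np.mean(np.sum(jerk ** 2, axis=1)))

# Synthetic trajectories: a smooth arc vs. the same arc with hand tremor.
t = np.linspace(0, 1, 90)
smooth = np.stack([np.sin(t), np.cos(t)], axis=1)
rng = np.random.default_rng(0)
jittery = smooth + 0.01 * rng.standard_normal(smooth.shape)

print(mean_squared_jerk(smooth) < mean_squared_jerk(jittery))  # True
```

In practice the tip positions would come from a tracking model applied to the video; features like this could then complement qualitative video-question-answering feedback.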

Name                Role     School                          Department
Serena Yeung        Main PI  School of Medicine              Biomedical Data Science
Gabriel Brat        Co-PI    External Collaborator, Harvard  Beth Israel Deaconess Medical Center
Teodor Grantcharov  Co-PI    School of Medicine              General Surgery

This project focuses on Pupper, an AI-enabled robotic dog developed at Stanford, aimed at improving the hospital experience for pediatric patients facing social isolation, depression, and/or anxiety. Unlike traditional quadrupeds, Pupper is approachable, cost-effective, and safe, making it well suited for interaction with children. It offers an engaging alternative to conventional sedation methods, potentially reducing healthcare costs and medication risks. With its computer vision and agility capabilities, Pupper has also shown promise as a physical therapy motivator and source of emotional support. This research will progress along two parallel paths: technical enhancement of Pupper, including AI advances in computer vision, autonomous gait, and speech processing; and clinical studies assessing Pupper's impact in pediatric care. These studies will focus on mitigating social isolation, reducing anxiety and/or depression, and facilitating physical therapy participation among hospitalized children.

Name           Role     School                 Department
Karen Liu      Main PI  School of Engineering  Computer Science
Teresa Nguyen  Co-PI    School of Medicine     Anesthesiology, Perioperative and Pain Medicine