
HAI & Wu Tsai Neuro Partnership Grant

Stanford HAI and the Wu Tsai Neurosciences Institute jointly seek proposals that transform our understanding of the human brain using AI and advance the development of intelligent technology. We especially aim to fund proposals that can make a persuasive case that initial results will catalyze further support from internal and external stakeholders. We expect to award four to six one-year grants, up to $125,000 each.

2022 Grant Recipients

  • Stroke is the third-leading cause of death and disability combined in the world, with an estimated global cost of over US$721 billion (0.66% of global GDP). The primary method to induce motor recovery in stroke patients is active motor training via physical and occupational therapy; however, outcomes of these treatments are often unsatisfactory. Robotic rehabilitation with a brain-computer interface (BCI) and virtual reality (VR) can improve the efficacy of therapy by actively engaging patients' brains during rehabilitation sessions. Several such systems have been developed; however, the underlying hardware and signal-processing algorithms remain challenging. To address these challenges, we propose a radical solution: combining a brain-computer interface and augmented reality (AR) into a single rehabilitation platform. We propose to use steady-state visual evoked potentials (SSVEPs) as inputs to the BCI, with action observation (AO) implemented via AR-based visual feedback, to overcome major limitations of current BCI-based approaches. The proposed BCI-AR rehabilitation system has the potential to revolutionize future stroke treatment both in clinics and at home. (A sketch of a standard SSVEP decoding baseline follows the project team table below.)

    | Name             | Role    | School                | Department             |
    | ---------------- | ------- | --------------------- | ---------------------- |
    | Ada Poon         | Main PI | School of Engineering | Electrical Engineering |
    | Monroe Kennedy   | Co-PI   | School of Engineering | Mechanical Engineering |
    | Maarten Lansberg | Co-PI   | School of Medicine    | Neurology              |
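
    The abstract does not spell out the decoding pipeline, but a common baseline for SSVEP-based BCIs is canonical correlation analysis (CCA) between a short EEG window and sinusoidal reference signals at each stimulus frequency. The sketch below illustrates that baseline only; the harmonic count, window handling, and function names are our assumptions, not details from the proposal.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import CCA

    def ssvep_references(freq, fs, n_samples, n_harmonics=2):
        """Sine/cosine references at a stimulus frequency and its harmonics."""
        t = np.arange(n_samples) / fs
        refs = []
        for h in range(1, n_harmonics + 1):
            refs.append(np.sin(2 * np.pi * h * freq * t))
            refs.append(np.cos(2 * np.pi * h * freq * t))
        return np.column_stack(refs)  # shape: (n_samples, 2 * n_harmonics)

    def classify_ssvep(eeg, fs, stim_freqs):
        """Pick the stimulus whose references correlate best with the EEG.

        eeg: (n_samples, n_channels) window of occipital EEG.
        """
        scores = []
        for f in stim_freqs:
            refs = ssvep_references(f, fs, eeg.shape[0])
            x_c, y_c = CCA(n_components=1).fit_transform(eeg, refs)
            scores.append(np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1])
        return stim_freqs[int(np.argmax(scores))], scores
    ```

    In a BCI-AR system of this kind, each AR target flickers at its own frequency, and the best-correlating frequency is taken as the user's selection.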

  • An average adult speaks approximately 16,000 words per day, using verbal communication to build and maintain relationships, meet basic needs, navigate safely, and work. Approximately 10% of the US adult population reports a communication disorder, and severe disease can prevent vocalized speech altogether. Although Augmentative and Alternative Communication (AAC) technology is often used by people with communication disorders, existing systems are strongly limited in performance, inhibiting participation in spoken conversation and creating an urgent need for improvement. Our approach uses grids of high-density surface electromyography (HD-sEMG) channels embedded on a soft, conformable substrate, enabling close adhesion to the face during speech production and widespread coverage of the articulator muscles. This lets us infer wearer intentions with high accuracy. By combining novel materials science with modern machine learning, we aim to push HD-sEMG capabilities significantly beyond prior work and enable new forms of human-computer interaction. (A toy HD-sEMG decoding sketch follows the project team table below.)

    | Name            | Role    | School                | Department             |
    | --------------- | ------- | --------------------- | ---------------------- |
    | Zhenan Bao      | Main PI | School of Engineering | Chemical Engineering   |
    | Shaul Druckmann | Co-PI   | School of Medicine    | Neurobiology           |
    | Krishna Shenoy  | Co-PI   | School of Engineering | Electrical Engineering |
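
    The proposal pairs novel sensing hardware with machine learning but does not name a specific decoder. As a minimal, hypothetical baseline, the sketch below windows multichannel sEMG into per-channel RMS features and fits a linear classifier; the window sizes, sampling rate, and label scheme are illustrative assumptions only.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def rms_features(emg, fs, win_s=0.2, hop_s=0.1):
        """Root-mean-square amplitude per channel over sliding windows.

        emg: (n_samples, n_channels) high-density surface EMG.
        Returns a (n_windows, n_channels) feature matrix.
        """
        win, hop = int(win_s * fs), int(hop_s * fs)
        starts = range(0, emg.shape[0] - win + 1, hop)
        return np.array([np.sqrt(np.mean(emg[s:s + win] ** 2, axis=0))
                         for s in starts])

    # Minimal decoding pipeline: RMS features -> linear classifier over
    # hypothetical articulation labels (e.g., silently mouthed phonemes).
    decoder = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    # decoder.fit(rms_features(train_emg, fs=2000), train_labels)
    # predictions = decoder.predict(rms_features(test_emg, fs=2000))
    ```

    The project's actual models are presumably far richer; this only fixes the feature-then-classify structure such decoders share.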

  • An assembly of neurons encodes information in a sequence of spikes. Axons from this assembly deliver its spike sequence to a short stretch of dendrite. How this stretch decodes the encoded information is unknown. This project will mine a microscale reconstruction of a millimeter cube of brain tissue for anatomical signatures of sequence decoding. These signatures are predicted by a computational model of a dendrite that we developed: it responds only when a sequence's spikes activate its synapses consecutively, from one end of the stretch to the other. The model makes a testable prediction: when branches of axons carrying a sequence contact two stretches of dendrite, they will synapse onto those stretches in the same order. Confirming this prediction would unravel how axon branches and dendrite stretches are organized at the microscale, and in turn reveal how biological neural nets operate with far fewer signals than artificial neural nets. Such sparse signaling saves energy, and thus heat, which could enable AI chips to become 3D, like the brain. (A toy formalization of the sequence-detection rule follows the project team table below.)

    | Name           | Role    | School                | Department     |
    | -------------- | ------- | --------------------- | -------------- |
    | Kwabena Boahen | Main PI | School of Engineering | Bioengineering |
    | Andreas Tolias | Co-PI   | School of Medicine    | Ophthalmology  |
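
    The dendrite model itself is not given in the abstract, but its selection rule (respond only when spikes arrive at synapses consecutively from one end of the stretch to the other) can be stated in a few lines. The toy check below is our illustration of that rule and of the synapse-ordering prediction; all names are hypothetical.

    ```python
    import numpy as np

    def responds_to_sequence(positions, spike_times):
        """Toy version of the model's rule: the dendrite stretch responds
        only when spikes activate its synapses consecutively from one end
        to the other, i.e. spatial order matches temporal order.

        positions:   synapse locations along the stretch.
        spike_times: spike arrival time at each synapse.
        """
        return bool(np.all(np.argsort(positions) == np.argsort(spike_times)))

    def same_synapse_order(order_on_stretch_a, order_on_stretch_b):
        """The testable prediction: axon branches carrying one sequence that
        contact two dendrite stretches synapse onto both in the same order."""
        return list(order_on_stretch_a) == list(order_on_stretch_b)
    ```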

  • We can easily monitor physiological signals like heart rate and respiration with wearable sensors to track physical disease, but what about tracking the decline in attention, memory, and other cognitive skills that can occur with neurodegenerative diseases like Parkinson's? This project aims to measure and model human looking behavior during daily life to track cognitive decline in Parkinson's patients. Why looking behavior? Because where a person looks can reveal a great deal about what they may be thinking. We will use deep learning with a transformer architecture to predict where Parkinson's patients will look next, based on what they are looking at and their previous fixations. We expect that models built on different types of Parkinson's patients and control groups will be able to differentiate subtle differences in looking behavior. The long-term goal of the project is to use looking-behavior modeling as a foundation for minimally invasive, sensitive measures for diagnosing and tracking neurodegenerative diseases. (A minimal next-fixation transformer sketch follows the project team table below.)

    | Name                    | Role    | School                            | Department |
    | ----------------------- | ------- | --------------------------------- | ---------- |
    | Justin Gardner          | Main PI | School of Humanities and Sciences | Psychology |
    | Leila Montaser Kouhsari | Co-PI   | School of Medicine                | Neurology  |
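
    The abstract specifies a transformer that predicts the next fixation from scene content and fixation history. The PyTorch sketch below shows a stripped-down, history-only version; the layer sizes, sequence length, and the omission of image features are our simplifications, not the project's architecture.

    ```python
    import torch
    import torch.nn as nn

    class NextFixationTransformer(nn.Module):
        """Toy causal transformer over a scanpath: given past fixation
        coordinates, predict the next fixation (x, y). Scene content,
        which the project also conditions on, is omitted for brevity."""

        def __init__(self, d_model=64, n_heads=4, n_layers=2, max_len=64):
            super().__init__()
            self.embed = nn.Linear(2, d_model)            # (x, y) -> d_model
            self.pos = nn.Parameter(torch.zeros(max_len, d_model))
            layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                               batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, n_layers)
            self.head = nn.Linear(d_model, 2)             # next (x, y)

        def forward(self, fixations):                     # (batch, seq, 2)
            seq = fixations.shape[1]
            h = self.embed(fixations) + self.pos[:seq]
            causal = torch.triu(torch.full((seq, seq), float('-inf')),
                                diagonal=1)
            h = self.encoder(h, mask=causal)              # no peeking ahead
            return self.head(h[:, -1])                    # next fixation

    # model = NextFixationTransformer()
    # next_xy = model(torch.randn(8, 32, 2))  # 8 scanpaths of 32 fixations
    ```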