
2021 Seed Grant Recipients

The Seed Research Grants are designed to support new, ambitious, and speculative ideas with the goal of generating initial results.


AI Accountability in Practice

Riitta Katila, Angele Christin

Civics Education for a Just and Sustainable Future

Patricia Bromley

Computational Tools For Algorithmic Impact Assessments

Christopher Re, Daniel Ho

Creating and understanding dynamic predictive world models: from brains to machines

Surya Ganguli, Mark Schnitzer

Decision Systems that Reflect Minority Perspectives

Tatsunori Hashimoto, Michael Bernstein, Jeffrey Hancock

Development of a low-cost cellphone based self-test for seroprevalence monitoring

Manu Prakash, Peter Kim

How AI Can Affect Consumers When Firms Have Access to Personal Data (and How Competition Can Help)

Patrick Kehoe, Brad Larsen, Elena Pastorino

Humanities AI Network (HUMAIN)

Elaine Treharne, Mark Algee-Hewitt, Anna Bigelow, Angele Christin, Shane Denson, Stephen Monismith, Ge Wang

In Touch with Causation

Sean Follmer, Jeannette Bohg, Tobias Gerstenberg

In(advert)ent: Investigating and countering disparities in race and gender representation in online advertising

Jeffrey Hancock, James Landay

Interpretable learned features improve RNA isoform assignment and power cardiac regeneration

Lars Steinmetz, Tsachy Weissman

Implicit Representations for Robotics, Medical Imaging, and Interactive Design

Gordon Wetzstein, Jeannette Bohg, Jiajun Wu

Missingness in Action: A Stanford Conference on the Absence of Data and the Future of AI in Healthcare

Christian Rose, Italo Brown, Michael Gisondi

Probing network computation through integration of artificial and biological neural networks

Paul Nuyujukian

Prototyping an Intelligent Agent for CO2 Sequestration in Saline Aquifers

Jef Caers, Sally Benson, Mykel Kochenderfer, Tapan Mukerji

Rating Systems And The Future of Algorithmic Worker Evaluation

Michael Bernstein, Ramesh Johari, Margaret Levi

Self-Supervised Representation Learning Of Screen Data To Facilitate Behavioral And Reinforcement Learning Models

Nick Haber, Nilam Ram, Byron Reeves, Thomas Robinson

Stanford Center For AI Safety Postdoctoral Fellowship

Clark Barrett, Mykel Kochenderfer, Dorsa Sadigh

Toward Machine Models of Structured Human Reasoning through Explanation-Based Meta-Learning

Chelsea Finn, Jay McClelland

Ultra-Fast MRI For Precision Radiotherapy Using Physics-Aware Deep Learning

Lucas Vitzthum, Daniel Chang, Lianli Liu, John Pauly, Lei Xing

Understanding Real-World Human Behaviors via Embodied, Interactive Virtual Reality

Jiajun Wu, Hyowon Gweon, Nick Haber

Using Wearable Electrodermal Activity (EDA) Sensors To Identify ADHD Biotypes

Leanne Williams, James Landay, John Leikauf

What Conception of the “Human” Grounds Human-Centered Artificial Intelligence? A Cultural Framework for Equitable Development of Artificial Intelligence Across the Globe

Hazel Markus, Brian Lowery

 

AI Accountability in Practice

This project examines the concrete organizational roadblocks shaping the implementation of FATE (fairness, accountability, transparency, and ethics) values as the technology industry designs and implements AI systems. Few existing studies tie ethical issues around AI systems to how firms implement these systems at scale, or to the frictions that firms encounter depending on the domain and procedure of implementation. Our project aims to fill these critical gaps by drawing on in-depth qualitative research to examine “AI accountability in practice,” focusing not only on the internal features of FATE models but also on the details of their uses, interpretations, and circulation across departments and companies. To do so, we rely on a compare-and-contrast approach, analyzing current efforts to implement FATE values in light of historical cases where fields such as aviation, healthcare, and corporate social responsibility sought to address comparable issues, with different outcomes. In parallel, we draw on interviews and content analysis to map the range of FATE strategies and documentation currently implemented in the technology sector. We complement this review with in-depth case studies of technology companies. Through this structured comparison of the main strategies developed to implement such values across domains and periods, we hope to document what is new, and what is not, in the organizational hurdles and constraints that technology firms face when addressing the challenges of AI systems in terms of fairness, accountability, transparency, and ethics.

Name | Role | School | Department
Riitta Katila | PI | School of Engineering | Management Science and Engineering
Angele Christin | Co-PI | School of Humanities and Sciences | Communication


Civics Education for a Just and Sustainable Future

Our research brings the tools of AI to bear on a pressing challenge in today’s world: how to build a stronger, more just democracy that embraces sustainable development. One mechanism for building a just and sustainable future is to re-imagine how we teach youth to become citizens. For many decades, investment in civics and history education has lagged behind investment in science, technology, engineering, and math. The result is that we have little systematic knowledge to help us understand what students learn about becoming good citizens. However, AI technologies afford an opportunity to contribute to the greater good by revealing what students learn about becoming citizens. We will use Natural Language Processing (NLP) to provide a comprehensive assessment of the content of history and civics textbooks: documenting textbook depictions of diversity, equity, and sustainability; developing and testing arguments to explain why textbook content varies; and adapting existing NLP methods to the domain of textbook data. Understanding and changing textbook content is one important lever in the multi-faceted process required to redesign history and civics education. In recent years, issues of growing income inequality, persistent racial injustice, and increasingly devastating climate disasters have taken center stage in public discourse. As a result, there is a unique window of opportunity to create positive social change in history and civics education.
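As a toy illustration of the kind of textbook content analysis described above, the sketch below counts theme-related vocabulary in a tiny hypothetical corpus. The passages and keyword lists are invented for illustration only; the project itself would rely on curated lexicons and trained NLP models rather than simple keyword matching.

```python
from collections import Counter
import re

# Hypothetical mini-corpus standing in for digitized textbook passages.
passages = [
    "Citizens vote in elections and participate in civic life.",
    "Sustainable development balances economic growth with equity.",
    "The movement for racial equity reshaped civic participation.",
]

# Hypothetical keyword lists; a real study would use richer lexicons.
themes = {
    "civics": {"citizens", "vote", "elections", "civic", "participation"},
    "sustainability": {"sustainable", "development", "climate"},
    "equity": {"equity", "racial", "justice"},
}

def theme_counts(texts, theme_words):
    """Count how often each theme's keywords appear across the corpus."""
    tokens = Counter()
    for text in texts:
        tokens.update(re.findall(r"[a-z]+", text.lower()))
    return {name: sum(tokens[w] for w in words) for name, words in theme_words.items()}

counts = theme_counts(passages, themes)
print(counts)
```

Comparing such counts across textbooks, states, or decades is one simple way to document how depictions of diversity, equity, and sustainability vary.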

Name | Role | School | Department
Patricia Bromley | PI | Graduate School of Education | Graduate School of Education


Computational Tools For Algorithmic Impact Assessments

Algorithmic impact assessments (AIAs) have emerged as a promising framework for auditing and regulating automated decision systems. In an AIA, the developer of an automated decision system studies the system’s effects on its users, including potential implications for fairness, justice, privacy, or bias. However, a major barrier to the adoption of AIAs is the lack of clarity on what empirical evaluations AIAs consist of and how practitioners should implement them. Our work seeks to alleviate this challenge by developing open source tools for scalably implementing AIAs and understanding how the computational principles underlying ML evaluation should inform regulatory and policy guidelines.

Name | Role | School | Department
Christopher Re | PI | School of Engineering | Computer Science
Daniel Ho | Co-PI | School of Law | School of Law


Creating and understanding dynamic predictive world models: from brains to machines

Name | Role | School | Department
Surya Ganguli | PI | School of Humanities and Sciences | Applied Physics
Mark Schnitzer | Co-PI | School of Humanities and Sciences | Biology and Applied Physics


Decision Systems that Reflect Minority Perspectives

How do we build artificial intelligence systems that reflect our values? Current algorithmic approaches rely on the problematic assumption that there is a single consensus answer that AI systems should imitate. In social computing systems such as Facebook, Wikipedia, and Twitter, as well as in classification tasks ranging from content moderation to hate speech detection to misinformation detection (Borkan et al. 2019; Zhou and Zafarani 2018), there often exist fundamental disagreements between majority and minority groups on what the correct labels ought to be. Unfortunately, the ground truth labels used to train these AIs are typically determined by a majority vote of a small handful of labelers (Muller et al. 2021), often resulting in the perspective of the most numerous group overriding other groups in determining the ground truth label for training and evaluation. The resulting AIs appear to have excellent performance on held-out test sets (Gordon et al. 2021), and then launch to the public, where they fail marginalized groups. Our project asks: how might we rearchitect AI systems for situations where irreconcilable disagreements between groups make it difficult to define a ‘ground truth’ label?
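The aggregation problem described above can be made concrete with a small sketch using hypothetical data (not the project's): pooled majority voting erases a minority group's consistent judgment, while disaggregated, per-group labels preserve the disagreement.

```python
from collections import Counter

# Hypothetical toxicity labels from two annotator groups for three posts.
# 1 = "toxic", 0 = "not toxic". Group B (a minority of annotators)
# consistently flags post "p2", but pooled majority vote erases that signal.
labels = {
    "p1": {"group_a": [0, 0, 0], "group_b": [0]},
    "p2": {"group_a": [0, 0, 0], "group_b": [1]},
    "p3": {"group_a": [1, 1, 0], "group_b": [1]},
}

def majority_vote(votes):
    """Return the most common label among a list of votes."""
    return Counter(votes).most_common(1)[0][0]

def pooled_label(item):
    """Conventional aggregation: pool all annotators, take the majority."""
    return majority_vote([v for group in item.values() for v in group])

for post, item in labels.items():
    per_group = {g: majority_vote(v) for g, v in item.items()}
    print(post, "pooled:", pooled_label(item), "per-group:", per_group)
```

For "p2" the pooled label is 0 even though group B unanimously labels it 1; keeping per-group labels is one starting point for systems that do not force a single consensus answer.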

Name | Role | School | Department
Tatsunori Hashimoto | PI | School of Engineering | Computer Science
Michael Bernstein | Co-PI | School of Engineering | Computer Science
Jeffrey Hancock | Co-PI | School of Humanities and Sciences | Communication


Development of a low-cost cellphone based self-test for seroprevalence monitoring

Vaccination is fundamental for ending the current coronavirus pandemic. However, vaccines are limited in most places around the world, and, with asymptomatic cases suspected to be a significant proportion of the population (Vogl T, Leviatan S, Segal E, 2021), it is broadly unknown who is protected and who is not. Important questions also remain regarding the true spread of the pandemic and the correlation of an antibody response with risk of reinfection, COVID-19 disease severity, and other diseases. Fortunately, significant amounts of antibody against SARS-CoV-2 may persist for several weeks to months after infection (Grossberg et al. 2021) or vaccination, which allows for detection even after the virus can no longer be detected by nucleic acid or antigen tests. Combining the power of a simple, instrument-free assay, machine learning, and a cell phone, we propose to build a cellphone-based tool for microagglutination analysis that will allow anyone in the world to self-measure their antibody status.

Name | Role | School | Department
Manu Prakash | PI | School of Engineering | Bioengineering
Peter Kim | Co-PI | School of Medicine | Biochemistry


How AI Can Affect Consumers When Firms Have Access to Personal Data (and How Competition Can Help)

A major policy concern with the advent of Big Data and Artificial Intelligence is that firms in online markets, with access to large amounts of sensitive information about individual consumers and sophisticated machine learning tools, may exploit this advantage to price discriminate against individual consumers, charging higher prices to consumers who are known to have a higher demand for a particular product. Concerns about the ways consumers might be harmed by the exploitation of their personal data have led to congressional hearings and state and national policy proposals both in the U.S. and internationally. Still, little is known about how such policies should be designed to best protect consumers without hampering the progress of AI technologies that might actually benefit consumers. Our project aims to provide guidance on this front. We will build a model of how the use of these tools by large firms affects consumers broadly, and estimate it with AI tools using a large database of proprietary consumer data. Our key preliminary finding: AI uses of personal data can help consumers when there is sufficient competition between firms and when all firms have similar levels of access to consumers' information. Outside of these boundaries, consumers may be harmed. These preliminary findings suggest that regulators should pay particular attention to protecting competition in AI-intense consumer retail marketplaces, such as e-commerce.

Name | Role | School | Department
Patrick Kehoe | PI | School of Humanities and Sciences | Economics
Brad Larsen | Co-PI | School of Humanities and Sciences | Economics
Elena Pastorino | Co-PI | Hoover Institution | Dean of Research


Humanities AI Network (HUMAIN)

HUMAIN, the Humanities AI Network, is a Collaboratory dedicated to problem-solving through humanistic research using AI methods. Our focus is ‘The Uncertainty of Being(s) Recorded’. We will host two major workshops in 2022, tackling ‘Lost and Found: Discoverability and Surveillance’ and ‘Bodies of Records: the Aesthetics of Archives and AI’. Involving faculty, research staff, graduate students, and undergraduates, our collaboration tackles the most urgent problems in how data, from the earliest days of the human record into the future of predictive processes of recording, both reflect and contribute to human experience and effort. Our concerns involve the ways in which the record shapes a version of reality: how it oppresses some groups while privileging and giving voice to dominant ones. Whose reality is represented in the data? How are subjects represented? And who reads, watches, and secures that data into the future?

Name | Role | School | Department
Elaine Treharne | PI | School of Humanities and Sciences | English
Mark Algee-Hewitt | Co-PI | School of Humanities and Sciences | English
Anna Bigelow | Co-PI | School of Humanities and Sciences | Religious Studies
Angele Christin | Co-PI | School of Humanities and Sciences | Communication
Shane Denson | Co-PI | School of Humanities and Sciences | Art and Art History
Stephen Monismith | Co-PI | School of Engineering | Civil and Environmental Engineering
Ge Wang | Co-PI | School of Humanities and Sciences | Music


In Touch with Causation

Name | Role | School | Department
Sean Follmer | PI | School of Engineering | Mechanical Engineering
Jeannette Bohg | Co-PI | School of Engineering | Computer Science
Tobias Gerstenberg | Co-PI | School of Humanities and Sciences | Psychology


In(advert)ent: Investigating and countering disparities in race and gender representation in online advertising

Online advertising is pervasive and powerful, but remains understudied from the perspective of users. Unlike other forms of media, users have limited control over the ads they are shown, can be targeted based on potentially inaccurate or insensitive inferred attributes, and are consciously and unconsciously prompted to change their beliefs and behaviors by ads. Our team will build In(advert)ent, the first user-centered system to study race and gender biases in online advertising. Our system will allow us to understand the lived experiences of real internet users as they encounter repeated exposures to numerous independent, personalized ad delivery platforms that follow them across the web. With this user-centered, cross-platform, in-the-wild approach, we will observationally measure race and gender disparities in the content and audience of ads, and also experiment with interventions to change ad landscapes and measure their effect on users’ behaviors and beliefs.

Name | Role | School | Department
Jeffrey Hancock | PI | School of Humanities and Sciences | Communication
James Landay | Co-PI | School of Engineering | Computer Science


Interpretable learned features improve RNA isoform assignment and power cardiac regeneration

Heart disease is the leading cause of death worldwide, with replacement of the heart with a donor heart the sole option available upon end-stage heart failure. An attractive alternative is transplantation of heart muscle cells, cardiomyocytes, derived from the patient’s own cells. This therapeutic strategy bypasses immunogenicity and the associated high risk of transplant rejection. While it holds great promise, there is still much to understand about the genetic and molecular basis of cardiomyocyte cell biology to ensure that regeneration, repair, and precision integration of cardiomyocytes into heart tissue are safe and effective. It requires an unprecedented level of research detail: for one, distinct and long species of RNA originating from the same gene, so-called isoforms, underlie the cell biology of cardiomyocytes. In fact, the two largest genes found in the human heart, DMD, coding for the protein dystrophin, and TTN, coding for titin, measure a stunning 2.3 megabases (Mb) and 0.3 Mb, respectively. For the former, that is close to one millimeter of DNA. The quantitative measurement of the complex RNA species that originate from these and other genes is painstakingly difficult. We aim to iteratively improve the design of molecular identifiers, short DNA sequences that allow us to trace tens of thousands of single molecules of RNA back to their origin: we use the human ability to recognize patterns in noisy environments to distill machine-learned features that are translatable to the experimental setup. Separable, traceable identifiers allow us to measure long RNA species in single cardiomyocytes at the scale of the human heart. We anticipate that insights generated through these measurements will increase the precision and accuracy with which we can reprogram heart muscle cells, catalyzing the development of therapeutic strategies for heart disease.
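As a back-of-the-envelope aside (our own arithmetic, not the project's identifier design), the birthday-problem estimate below shows why the length of such molecular identifiers matters: the expected number of barcode collisions among tens of thousands of randomly tagged molecules drops sharply as the barcode grows.

```python
def expected_collisions(n_molecules, barcode_len):
    """Expected number of colliding pairs when n random DNA barcodes of the
    given length are drawn uniformly from the 4**L possible sequences."""
    space = 4 ** barcode_len  # four bases: A, C, G, T
    return n_molecules * (n_molecules - 1) / (2 * space)

n = 50_000  # "tens of thousands" of single RNA molecules
for length in (8, 12, 16):
    print(length, 4 ** length, expected_collisions(n, length))
```

At length 8 collisions are rampant, while by length 16 fewer than one collision is expected among 50,000 molecules; real identifier design must additionally keep barcodes separable under sequencing errors, which is where learned features come in.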

Name | Role | School | Department
Lars Steinmetz | PI | School of Medicine | Genetics
Tsachy Weissman | PI | School of Engineering | Electrical Engineering


Implicit Representations for Robotics, Medical Imaging, and Interactive Design

Traditional discrete signal representations are fundamentally incompatible with many emerging AI techniques that require continuous, differentiable representations. In this project, we explore AI models using implicit representations with broad applications in robotics, 3D vision, medical imaging, graphics, and interactive design. These representations model multi-dimensional signals as multilayer perceptrons, offering continuous, differentiable signal representations suitable for end-to-end optimization of application-specific task performance. For example, in robotics applications this includes decision making based on a policy network jointly learned with the inferred scene representation. A key question for this project is how to learn distributions of functions that represent classes of signals, with the goal of allowing AI to reason about this function space to represent, generate, interpolate, edit, and compress signals in the aforementioned applications.
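A minimal sketch of the underlying idea, substituting random sinusoidal features and least squares for a trained multilayer perceptron: the signal becomes a continuous, differentiable function of its coordinate that can be queried anywhere, not just on the sampling grid. All names and values here are illustrative, not the project's models.

```python
import numpy as np

rng = np.random.default_rng(0)
N_FEATS = 64
W = rng.standard_normal(N_FEATS) * 30.0   # fixed random frequencies
B = rng.uniform(0, 2 * np.pi, N_FEATS)    # fixed random phases

def phi(x):
    """Continuous, differentiable feature map for coordinates x in [0, 1]."""
    return np.sin(np.outer(np.atleast_1d(x), W) + B)

# Target "signal": samples of a bumpy function on a coarse coordinate grid.
x_train = np.linspace(0, 1, 40)
y_train = np.sin(2 * np.pi * x_train) + 0.5 * np.sin(6 * np.pi * x_train)

# Fit linear weights by least squares; the pair (phi, coef) now *is* the
# signal, queryable at any coordinate rather than only at the 40 samples.
coef, *_ = np.linalg.lstsq(phi(x_train), y_train, rcond=None)

def signal(x):
    return phi(x) @ coef

train_err = np.max(np.abs(signal(x_train) - y_train))
print("max error at training coordinates:", train_err)
print("value at an off-grid coordinate:", signal(0.123)[0])
```

Replacing the fixed random features with a learned network (e.g., with periodic activations) yields the implicit representations described above, with the added benefit that gradients flow through the representation for end-to-end task optimization.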

Name | Role | School | Department
Gordon Wetzstein | PI | School of Engineering | Electrical Engineering
Jeannette Bohg | Co-PI | School of Engineering | Computer Science
Jiajun Wu | Co-PI | School of Engineering | Computer Science


Missingness in Action: A Stanford Conference on the Absence of Data and the Future of AI in Healthcare

Machine learning offers great promise to narrow healthcare gaps and provide high-quality care in America and across the globe. Its success is predicated on the ability to collect and utilize large volumes of data, leading to an unsurprising rapid growth in its volume, velocity, and variety. However, more data does not necessarily mean better data. There are significant gaps in our data, as well as over- and under-representation of conditions or entire populations, presenting us with several problems: how do we currently control for missing information, how do we identify and then fill the gaps this missing data presents, and, finally, how do we plan for the incorporation of data we do not yet measure? We propose a consensus conference of experts in AI, participatory research, and design to share their knowledge and set an agenda to address these complex and critically important issues related to missing data in data sets used for machine learning.

Name | Role | School | Department
Christian Rose | PI | School of Medicine | Emergency Medicine
Italo Brown | Co-PI | School of Medicine | Emergency Medicine
Michael Gisondi | Co-PI | School of Medicine | Emergency Medicine


Probing network computation through integration of artificial and biological neural networks

Despite substantial progress, science still lacks a firm understanding of brain functions such as memory, cognition, and movement control. It’s clear that these functions are performed by the coordinated activity of millions of neurons in large networks, but the computations these networks carry out remain elusive. Advances in computing have led to the development of sophisticated artificial neural networks (ANNs), which are loosely modeled after biological neuronal networks (BNNs). While the underlying structure and individual components differ between ANNs and BNNs, they share some emergent network properties and both can solve complex computational problems. This project will assess the compatibility of ANNs and BNNs by attempting to integrate them so that they share information in support of function. If successful, this proof-of-concept study will demonstrate that ANNs can serve as an external support system for BNN function, representing an important breakthrough for next-generation implanted medical devices that treat brain disease.

Name | Role | School | Department
Paul Nuyujukian | PI | School of Medicine | Bioengineering and Neurosurgery


Prototyping an Intelligent Agent for CO2 Sequestration in Saline Aquifers

Only net-zero emissions can stop global warming. Reaching net zero will be virtually impossible without carbon capture, utilization, and sequestration (CCUS). While the concept of subsurface CO2 sequestration is well developed, actual operations remained limited to 21 commercially operating sites worldwide as of 2020. As the world starts pricing carbon, commercial sequestration will become increasingly economically viable. However, this does not resolve the technological challenges of storing CO2 in the subsurface, the speed at which this needs to be done, or the vast geographic diversity over which it needs to be achieved. Decisions will need to be made on which sites to select; which data to acquire to make a final assessment of CO2 storage potential; and how to optimally operate such systems, including well locations and rates, monitoring, and what additional infrastructure must be built to connect the CO2 source. Our project will develop a state-of-the-art Intelligent Agent by formulating sequential decision problems of this nature as Partially Observable Markov Decision Processes and developing practical solution methods in a real-world setting that can address the speed and urgency of needing to store CO2 in the subsurface.
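A toy sketch of the POMDP formulation mentioned above (our own simplification, not the project's model): the unknown site quality is the hidden state, a survey is a noisy observation, and Bayes' rule updates the belief that drives the next decision. The probabilities and decision threshold are invented for illustration.

```python
# Hidden state: is the candidate site "good" or "poor" for CO2 storage?
STATES = ("good", "poor")

# Assumed observation model: P(survey result is favorable | state).
P_FAVORABLE = {"good": 0.8, "poor": 0.3}

def update_belief(belief, obs_favorable):
    """One POMDP belief update via Bayes' rule after a survey observation."""
    likelihood = {
        s: (P_FAVORABLE[s] if obs_favorable else 1 - P_FAVORABLE[s])
        for s in STATES
    }
    unnorm = {s: likelihood[s] * belief[s] for s in STATES}
    z = sum(unnorm.values())
    return {s: unnorm[s] / z for s in STATES}

belief = {"good": 0.5, "poor": 0.5}   # uninformed prior over site quality
belief = update_belief(belief, True)  # observe one favorable survey
print(belief)

# A simple threshold policy: inject only once confidence in "good" is high;
# otherwise the best action is to acquire more data.
decision = "inject" if belief["good"] > 0.9 else "acquire more data"
print(decision)
```

One favorable survey raises the belief in "good" from 0.5 to about 0.73, not yet enough to commit, so the agent gathers more data, which is exactly the site-selection, data-acquisition, and operation trade-off the project formalizes.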

Name | Role | School | Department
Jef Caers | PI | School of Earth, Energy and Environmental Sciences | Geological Sciences
Sally Benson | Co-PI | School of Earth, Energy and Environmental Sciences | Energy Resources Engineering
Mykel Kochenderfer | Co-PI | School of Engineering | Aeronautics and Astronautics
Tapan Mukerji | Co-PI | School of Earth, Energy and Environmental Sciences | Energy Resources Engineering


Rating Systems And The Future of Algorithmic Worker Evaluation

Increasingly, workers in online labor markets must rely on rating systems: algorithmic systems that label workers with a one-to-five-star rating. These AI-based systems are the technological bedrock of the platforms, and influence everything from how work is allocated, to the wages that workers can command, to the career paths that are open to them. These systems, however, are backward-looking, only showcasing past behavior: workers who have no or few prior projects struggle to gain attention, and workers who wish to grow their career by acquiring new skills struggle to build a reputation in the new area. Our project seeks to develop a digital resumé for workers in online labor platforms that is forward-looking, drawing on market data to highlight how the worker's experiences prepare them for the next job they are interested in. This work brings together statistical methodology, algorithm design, human-computer interaction, and political economy to enable the same career pathways for online workers that we take for granted in offline work. We will introduce a novel approach based on empirical Bayesian estimation that "borrows" knowledge from other workers' past experiences, and avoids penalizing newcomers for a lack of prior work experience. We will engage in a participatory design process with online IT workers to build and publicly launch our new digital resumé platform for workers to use when applying to jobs on online work platforms.

Name | Role | School | Department
Michael Bernstein | PI | School of Engineering | Computer Science
Ramesh Johari | Co-PI | School of Engineering | Management Science and Engineering
Margaret Levi | Co-PI | School of Humanities and Sciences | Political Science


Self-Supervised Representation Learning Of Screen Data To Facilitate Behavioral And Reinforcement Learning Models

The past decade has seen the dramatic success of deep convolutional neural networks for computer vision. Classifiers trained on large-scale image and video datasets (e.g., ImageNet) achieve at-or-above-human-level performance and provide generic visual representations that enable transfer learning to other visual classification tasks. These innovations expand our ability to obtain rich, versatile visual representations of naturalistic image data and to parse the semantics of the human visual world. Increasingly, however, much of human visual experience is mediated through computer, tablet, and smartphone screens. To better understand, model, and meaningfully contribute to this important segment of human experience, life lived on and through screens, we need rich and versatile visual representations of the digital environments encountered in everyday life. With those representations, we can map how humans navigate, learn from, and contribute to digital life. We propose to use newly available data and analyses grounded in media psychology to develop visual representations that facilitate semantic labeling of human screen-based behavior and support development of new computational models of human learning and curiosity.

Name | Role | School | Department
Nick Haber | PI | Graduate School of Education | Graduate School of Education
Nilam Ram | Co-PI | School of Humanities and Sciences | Communication and Psychology
Byron Reeves | Co-PI | School of Humanities and Sciences | Communication
Thomas Robinson | Co-PI | School of Medicine | Child Health


Stanford Center For AI Safety Postdoctoral Fellowship

Modern AI systems have seen success in a wide variety of tasks such as image recognition, natural language processing, and game playing, which has enabled their use in safety-critical settings such as transportation and healthcare. Unfortunately, analyses of these systems have demonstrated the ways in which AI systems can be unsafe, including sensitivity to small changes in their input or reliance on protected features such as race and gender to make decisions. The HAI seed grant will be used to fund a postdoctoral position at the Stanford Center for AI Safety, which is dedicated to the understanding and mitigation of risks associated with AI systems. The center sits at the intersection of industry applications, technical AI research, and government regulation. The recipient of the fellowship will develop a strategic research vision that addresses large-scale AI safety challenges by connecting their work with other researchers at the center, at Stanford more broadly, and in industry. We expect this fellowship to jump-start a tradition of postdoctoral fellowships at the center, which will ultimately help address the biggest challenges in AI safety.

Name | Role | School | Department
Clark Barrett | PI | School of Engineering | Computer Science
Mykel Kochenderfer | Co-PI | School of Engineering | Aeronautics and Astronautics
Dorsa Sadigh | Co-PI | School of Engineering | Computer Science/Electrical Engineering


Toward Machine Models of Structured Human Reasoning through Explanation-Based Meta-Learning

Some of the most impressive achievements of the human mind, including the contributions of mathematicians, physicists, and computer scientists, have led to deep scientific understanding and powerful technologies. We seek to combine human and machine learning research to build toward artificially intelligent systems that could potentially achieve these abilities. Our goals are guided by the tenet that scientific problem-solving skills rely on a combination of intuition and systematic reasoning, and that learning such skills occurs through both explicit instruction and explanatory discourse. Unlike typical machine learning systems, which learn only from explicit instruction (i.e., input-output pairs), our initial research will focus on building an explanation-based meta-learning framework that allows machines to quickly learn new formal reasoning skills from a combination of instruction and explanation. If successful, the proposed initial research could both deepen our understanding of human cognition and help address the underspecification problem in machine learning research.

Name | Role | School | Department
Chelsea Finn | PI | School of Engineering | Computer Science and Electrical Engineering
Jay McClelland | Co-PI | School of Humanities and Sciences | Psychology


Ultra-Fast MRI For Precision Radiotherapy Using Physics-Aware Deep Learning

Magnetic resonance imaging (MRI)-integrated radiotherapy has the potential to improve the treatment of cancer patients through the delivery of precise, high-dose radiation. Imaging guidance provided by MRI during radiation treatment can facilitate delivering higher doses to cancer cells than would be possible with traditional techniques, given the risk of toxicity to surrounding organs. The specific need to support daily adaptive treatment planning and live decision-making during radiotherapy, however, diverges significantly from diagnostic MRI and imposes more stringent limits on imaging time. The goal of this work is to exploit the wealth of individualized priors from radiotherapy patients, who receive multiple imaging and treatment sessions over the course of radiotherapy, and to combine such priors with domain-specific knowledge of MRI physics in a deep learning framework, allowing dramatic subsampling during image acquisition while still supporting adaptive radiotherapy with sufficient anatomical and biological information. Specifically, we aim to construct deep learning models that address two challenges in current MRI-guided radiotherapy practice: 1) real-time 3D tumor tracking during radiotherapy delivery, where model-based 3D MRI will be obtained within sub-seconds and used to update tumor position in real time for accurate radiation beam placement; and 2) daily quantitative imaging for treatment response monitoring, where a series of model-based MR images will be obtained within a clinically acceptable scan time and used to support biological parameter mapping on a daily basis, from which biomarkers predictive of treatment response can be extracted and used to guide radiation dose adjustment.
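As an illustration of the subsampling the abstract refers to (a generic sketch, not the project's physics-aware method), retrospective k-space undersampling keeps only a fraction of phase-encode lines, shortening the scan roughly in proportion; learned priors are then needed to reconstruct the missing data that a naive zero-filled inverse transform leaves aliased.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.standard_normal((64, 64))          # stand-in for an MR slice

kspace = np.fft.fft2(image)                    # fully sampled acquisition
mask = np.zeros(64, dtype=bool)
mask[::4] = True                               # keep every 4th phase-encode line
undersampled = kspace * mask[:, None]          # 4x fewer lines -> ~4x faster scan

zero_filled = np.fft.ifft2(undersampled).real  # naive reconstruction, aliased
print("sampled fraction of k-space lines:", mask.mean())
```

In the framework described above, a deep network conditioned on a patient's prior scans and on MRI physics would replace the zero-filled step, recovering usable anatomy from the quarter-sampled data.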

Name | Role | School | Department
Lucas Vitzthum | PI | School of Medicine | Radiation Oncology - Radiation Therapy
Daniel Chang | Co-PI | School of Medicine | Radiation Oncology - Radiation Therapy
Lianli Liu | Co-PI | School of Medicine | Radiation Oncology - Radiation Physics
John Pauly | Co-PI | School of Engineering | Electrical Engineering
Lei Xing | Co-PI | School of Medicine | Radiation Oncology - Radiation Physics


Understanding Real-World Human Behaviors via Embodied, Interactive Virtual Reality

Humans engage in diverse daily activities in complex environments. Most research in cognitive science, however, has focused on simple, isolated tasks in well-controlled laboratory settings. Thus, despite advances in understanding human thinking and reasoning, these findings are not always easily generalizable to actual human behavior outside the lab. This disconnect has also limited the direct relevance of cognitive science research to the development of AI, particularly where translating cognition into behavior is critical (e.g., robotics, embodied AI). In this project, we aim to address these issues by developing an embodied, interactive virtual reality (EI-VR) platform, accompanied by a curriculum for virtual embodiment learning, to study real-world human cognition and behavior. The new platform will allow us to achieve both experimental control and ecological validity; such studies, in turn, can provide valuable data for driving new advances in AI.

Name | Role | School | Department
Jiajun Wu | PI | School of Engineering | Computer Science
Hyowon Gweon | Co-PI | School of Humanities and Sciences | Psychology
Nick Haber | Co-PI | Graduate School of Education | Graduate School of Education


Using Wearable Electrodermal Activity (EDA) Sensors To Identify ADHD Biotypes

Attention-Deficit/Hyperactivity Disorder (ADHD) is a highly prevalent neurodevelopmental disorder. Despite prior efforts toward validating objective diagnostics for ADHD, clinical assessments centered on rating scales remain the primary diagnostic method. Prior work has shown that the functioning of the Autonomic Nervous System (ANS) is atypical in many people with ADHD. In this study, we examine the use of longitudinal measurement of Electrodermal Activity (EDA), a peripheral index of autonomic arousal, as a biomarker to 1) aid in the objective diagnosis of ADHD and 2) augment the diagnosis of ADHD by identifying potential subtypes, or biotypes. We plan to conduct a 10-day study of the EDA characteristics of individuals with a rating-scale diagnosis of ADHD, using a wearable sensor complemented with other behavioral data such as medication usage and physical activity. The study will yield a rich dataset for computational analysis to investigate the robustness of EDA features in characterizing ADHD and its biotypes and to detect atypical EDA activity.
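The study's analysis pipeline is not specified, but a common first step with wearable EDA data is to split the signal into a slow tonic level and a fast phasic component, then count skin-conductance responses (SCRs) as candidate features. The sketch below is a simplified, hypothetical illustration on synthetic data: moving-average detrending stands in for model-based decompositions such as cvxEDA, and all numbers (sampling rate, SCR shape, threshold) are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

fs = 4                                   # Hz; a typical wearable EDA sampling rate
t = np.arange(0, 600, 1 / fs)            # 10 minutes of synthetic data

# Synthetic skin conductance (microsiemens): slow tonic drift plus a few
# phasic skin-conductance responses (SCRs) and sensor noise.
tonic_true = 2.0 + 0.3 * np.sin(2 * np.pi * t / 600)
eda = tonic_true + 0.005 * rng.standard_normal(t.size)
for onset in (60.0, 200.0, 410.0):
    idx = t >= onset
    eda[idx] += 0.5 * np.exp(-(t[idx] - onset) / 5.0)   # decaying SCR shape

# Tonic/phasic split by moving-average detrending over a ~20 s window
# (a crude stand-in for model-based decompositions such as cvxEDA).
win = 20 * fs
tonic = np.convolve(eda, np.ones(win) / win, mode="same")
phasic = eda - tonic

# Simple SCR count: upward crossings of an amplitude threshold on the phasic
# component, restricted to the interior to avoid convolution edge effects.
thr = 0.15
core = phasic[win:-win]
crossings = (core[1:] > thr) & (core[:-1] <= thr)
n_scrs = int(crossings.sum())
mean_tonic = float(tonic.mean())
print(f"SCR count: {n_scrs}, mean tonic level: {mean_tonic:.2f} uS")
```

Features like the SCR rate, SCR amplitudes, and the tonic level per hour of wear, aggregated across the study days and joined with medication and activity logs, would then feed the computational analysis of EDA characteristics the abstract describes.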

Name | Role | School | Department
Leanne Williams | PI | School of Medicine | Psychiatry and Behavioral Sciences
James Landay | Co-PI | School of Engineering | Computer Science
John Leikauf | Co-PI | School of Medicine | Psychiatry and Behavioral Sciences - Child and Adolescent Psychiatry


What Conception of the “Human” Grounds Human-Centered Artificial Intelligence? A Cultural Framework for Equitable Development of Artificial Intelligence Across the Globe

"Human-centeredness" is a deceptively simple idea. Humans within a society and across the globe differ in their values, beliefs, and norms. To be human-centered is to deeply understand and represent the tendencies and needs of people from different cultural backgrounds. Nonetheless, current Artificial Intelligence (AI) development reflects and encodes a powerful universalistic fallacy: that humans in all contexts think, feel, and act in similar ways. Dominant visions of AI development in the US have been based on the cultural views of a distinctive sample that is disproportionately white, male, and US-centered. This lack of diverse cultural representation is likely to result in biased technological development. Designers in different societies lack proper tools to build empathy for and learn from AI development in other cultures, and the absence of a common understanding of cultural influences on AI development and its implications can cause confusion and tension among the public and hinder cross-cultural collaboration on AI development and ethics. In this proposal, we seek to substantiate the concept of human-centered artificial intelligence by developing a conceptual framework, based on cultural views of the environment and of the self, to inform the design of AI. As a first step, we will compare cultures in the US and East Asia (China and Japan in particular) in relation to the design of Ambient Intelligence (AmI). We will conduct culture cycle analyses to identify and distill the prominent cultural factors most relevant to the design of AI at various levels, generate futuristic scenarios about AmI in accordance with people's worldviews and self-construals, and conduct empirical studies based on these scenarios. Ultimately, we hope to enable equitable, culturally resonant technological development and forge collaborations on AI development and ethics across the globe.

Name | Role | School | Department
Hazel Markus | PI | School of Humanities and Sciences | Psychology
Brian Lowery | Co-PI | Graduate School of Business | Graduate School of Business
