2019 HAI Seed Grant Call for Proposals
HAI is pleased to announce its second seed grant call for proposals. We will award up to 25 grants of up to $75,000 each. This call for proposals aims to support innovative and interdisciplinary seed research in Human-Centered Artificial Intelligence. Proposals should support new, ambitious, and speculative ideas with the objective of obtaining initial results, and should be distinct from existing sponsored research. We encourage proposals involving collaborations of faculty and students across different fields, with a preference for AI-related research that bridges two or more Departments and/or Schools and advances a human-centered focus. The application deadline is December 15.
2018 HAI Seed Grant Recipients
In 2018, 25 proposals for innovative research were funded. The winning proposals are highly collaborative and interdisciplinary, and work to further the development, application, and study of human-centered artificial intelligence and related issues.
Adversarial Examples for Humans?
Gregory Valiant and Noah Goodman
Over the past five years, there has been growing interest in understanding why nearly all state-of-the-art machine learning systems are vulnerable to “adversarial examples”. Specifically, for nearly every datapoint on which a model would be expected to perform well (including points in the training set), one can carefully compute an extremely small perturbation—so small as to be imperceptible to the human eye—that causes the classification model to output an incorrect label. In this project, we will explore the extent to which humans are susceptible to analogous types of “adversarial examples”. Humans are generally regarded as the gold standard for robust perception and classification, and the implicit assumption in much of the work on “adversarial examples” is that humans do not have such vulnerabilities. We hope to understand the extent to which there exist natural settings (e.g., pertaining to vision, speech recognition, perception, etc.) in which most inputs can be altered by a small amount in a way that changes how humans would classify or characterize them. This is not a search for isolated instances (e.g., optical illusions), but an attempt to understand whether there are settings where nearly every natural input can be turned into an “illusion”. We hope that this work will yield significant insights into aspects of human perception (just as analogous work on machine learning systems has provided new perspectives on learning algorithms), and the results may have implications for security and safety.
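As a toy illustration of the phenomenon described above (a hypothetical linear classifier, not any model from this project; all names and values are invented), the following sketch shows how many per-feature changes, each bounded by a small budget, can align with a model's weights and flip its prediction:

```python
import random

# Toy adversarial-example sketch on an invented linear classifier.
random.seed(0)
dim = 1000                                             # e.g. one value per pixel
w = [random.choice([-1.0, 1.0]) for _ in range(dim)]   # fixed classifier weights

def predict(x):
    """Sign of the weighted sum: +1 or -1."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1

# A clean input built to score clearly positive, plus small bounded noise.
x = [0.1 * wi + random.uniform(-0.01, 0.01) for wi in w]

# Worst-case perturbation: shift every feature by eps against the weights.
# No single feature moves by more than eps, yet the aligned sum of the
# changes (eps * dim in total) overwhelms the clean score and flips the label.
eps = 0.2
x_adv = [xi - eps * wi for xi, wi in zip(x, w)]
```

In high dimensions the summed effect of the tiny coordinated shifts dominates, which is the intuition behind why such perturbations can remain imperceptible yet change the output.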
Automated Moderation of Small Group Deliberation
Ashish Goel and James Fishkin
While the Internet has revolutionized many aspects of human life, including commerce, advertising, social interactions, and education, it has not yet proved to be a force for good in large scale deliberation – in fact, open chat-groups and message boards often devolve into name calling and non-productive conversation when discussing substantive issues. This project seeks to develop a moderator bot that can mediate small group deliberative conversations, keeping them civil, engaged, and on point. We plan to carefully combine algorithmic elements with design elements that encourage the group to collectively moderate itself. One of our goals is to develop this platform to the point where it can be used to scale the design features of online Deliberative Polling® to large populations assigned to diverse small groups.
“Always On” Genetic Patient Diagnosis
Gill Bejerano and Jon Bernstein
Correcting Gender and Ethnic Biases in AI Algorithms
James Zou, Londa Schiebinger, Serena Yeung and Carlos Bustamante
Machine learning algorithms can contain gender and ethnic biases. Biases arise from a variety of sources, ranging from discrepancies in the training data to unconscious or conscious choices in the algorithm design. As AI becomes increasingly ubiquitous in everyday lives, such bias, if uncorrected, can lead to inequities in service and even systematic discrimination against specific populations. Modeling, understanding and correcting harmful human biases in AI is thus an essential step in developing algorithms that are broadly beneficial for humanity. In this project, we will develop a systematic framework of AI auditing where we leverage machine learning to discover and correct its own biases. In collaboration with social scientists, humanists and domain experts, we will apply AI audit to machine learning algorithms in biomedical, text and computer vision applications. Our goal is to make AI audit an integral component of the standard machine learning pipeline in industry and academia.
Dynamic Artificial Intelligence-Therapy for Autism on Google Glass
Dennis Wall, Tom Robinson and Terry Winograd
Children with autism (ASD) universally struggle to recognize facial expressions, make eye contact, and engage in social interactions. Although autism has no cure, many children can have dramatic recoveries if social skills are taught from an early age. While early detection is vital to initiating therapy that often has life-changing impacts, the delivery of such support represents a considerable bottleneck for families. There is a sharp and growing imbalance between the number of children requiring care and the availability of specialists suited to treat the disorder in its multi-faceted manifestations. The autism community faces a dual clinical challenge: how to direct scarce specialist resources to serve the diverse array of phenotypes, and how to monitor and validate best practices in treatment. Clinicians must now look to solutions that scale in a decentralized fashion, placing data capture, remote monitoring, and therapy increasingly into the hands of parents and patients. There is potential to meet this need through wearable tools and the massive amounts of emotion-labeled facial image and social video data available in the public arena, constituting “big data” that so far have had virtually no impact on healthcare. Tapping into this potential, we have prototyped an Artificial Intelligence (AI) tool for automatic facial expression recognition that runs on Google Glass through an Android app to deliver social emotion cues to children with autism while they interact with family members in their natural environment. Our pilot tool leverages Glass’s outward-facing camera to read a person’s facial expressions and passes facial landmarks to a native Android app for immediate machine learning-based emotion classification. The system is designed as a home behavioral therapy tool to teach children with autism how to interpret emotion in faces, improve overall social awareness, and increase eye contact during social interactions.
We designed the prototype to be an ephemeral learning aid, not a prosthetic, worn during 20-minute, parent-led social engagement sessions a few times per week over six weeks. Through a dedicated app, caregivers can then review and discuss auto-curated videos of social interaction captured throughout the day. Our Autism Glass system leverages wearable and machine-learning technologies to provide a scalable mobile therapy platform that enables behavioral interventions and remote data gathering from families’ homes while supplementing traditional therapies, enabling caregivers to deliver care that we posit can generalize learned skills into everyday life. We hypothesize that the system’s ability to provide continuous behavioral therapy during natural social interactions will enable dramatically faster gains in social acuity that, within a limited and self-directed period of use, will permit the child to engage in increasingly complex social scenarios on his/her own and contribute to improvements in eye contact, facial affect recognition, and overall social awareness. Leveraging our HAI seed grant, we will refine the system’s efficacy and ready it for widespread deployment.
Enabling Natural-Language Interactions in Educational Software
Alex Kolchinski, Sherry Ruan, Dan Schwartz and Emma Brunskill
One-on-one tutoring has long been held to be an effective practice in education: in a number of studies, tutors have been shown to raise students’ performance by a standard deviation or more. Software tutors show promise in expanding access to tutoring but have shortcomings relative to human tutors; a major one is the ability to deliver useful, frequent responses targeted to each student. While humans do this through natural-language interaction, tutoring software generally depends on lower-density cues like multiple-choice answers to target feedback. Bridging this gap depends on developing more powerful mechanisms to detect misconceptions in students’ explanations of their reasoning. However, training machine learning models to do so depends on having a large enough labeled dataset, which does not yet exist. We propose to collect and label such a dataset to stimulate research in detecting student misconceptions on an academic task, and to implement baseline models for community reference.
Fast, Multiphase Human-in-the-loop Optimization of Exoskeleton Assistance
Steven Collins and Emma Brunskill
Exoskeletons and active prostheses could restore mobility to people with neuromuscular impairments, but must first overcome challenges posed by our complex, unique, and continually changing bodies. A promising new approach is human-in-the-loop optimization, in which an algorithm automatically discovers and customizes assistance patterns for an individual while they use the device (Zhang et al., 2017, Science). In this seed project, we will develop a new algorithm for human-in-the-loop optimization that separately teaches the person how to use the exoskeleton and optimizes the exoskeleton to better assist the person. We will treat the training phase as a partially observable Markov decision process, in which the person’s expertise is monitored and improved, and the optimization phase as a contextual bandit using Bayesian optimization. This multiphase approach is expected to result in more effective training, faster optimization, and improved overall locomotor performance. Our long-term goal is to develop intelligent exoskeletons and prostheses that continuously adapt to a person throughout their lifetime, supporting whatever locomotion challenges they choose to approach.
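The two-phase idea can be caricatured in a few lines. This is a minimal sketch under invented assumptions (a simulated quadratic "metabolic cost" and a simple linear learning curve for expertise), not the authors' POMDP or Bayesian-optimization machinery:

```python
import random

random.seed(1)

def metabolic_cost(assistance, expertise):
    # Hypothetical noisy cost: a quadratic bowl with an optimum at 0.6,
    # plus a penalty that shrinks as the user's expertise grows.
    noise = random.gauss(0.0, 0.02)
    return (assistance - 0.6) ** 2 + (1.0 - expertise) * 0.5 + noise

# Phase 1 ("training"): expertise improves with practice trials.
expertise = 0.2
for _ in range(20):
    expertise = min(1.0, expertise + 0.04)

# Phase 2 ("optimization"): coarse-to-fine search over the assistance level,
# averaging a few noisy trials per candidate before narrowing the range.
low, high = 0.0, 1.0
for _ in range(4):
    candidates = [low + (high - low) * i / 4 for i in range(5)]
    scores = [sum(metabolic_cost(c, expertise) for _ in range(5)) / 5
              for c in candidates]
    best = candidates[scores.index(min(scores))]
    span = (high - low) / 4
    low, high = max(0.0, best - span), min(1.0, best + span)

best_assistance = (low + high) / 2
```

Separating the phases matters because, early on, high cost may reflect inexperience rather than a bad assistance pattern; optimizing only after training avoids conflating the two.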
Free Exploration in Human-Centered AI Systems
Mohsen Bayati and Ramesh Johari
All systems that learn from their environment must grapple with a fundamental tradeoff between making decisions that maximize current rewards (typically referred to as "exploitation") and choosing decisions that are likely to teach the system about the environment and thus potentially increase future rewards (referred to as "exploration"). In general, automated machine learning and artificial intelligence systems leverage a number of clever techniques to actively balance exploration and exploitation to maximize rewards over time. This dynamic, however, becomes substantially more complicated at the interface between machine learning systems and humans. For example, consider an algorithmic health care decision support platform offering treatment recommendations for a patient. In this case, "exploration" entails offering an option that--given the current system knowledge--may not be the myopically best choice, which may not be considered ethical. In this project, we consider an alternate yet remarkably simple approach to addressing this hurdle: precisely because these are systems with humans in the loop, there is often a great deal of free exploration available for any machine learning system to leverage--for example, when patients or physicians, the ultimate arbiters of the treatment plan, deviate from the myopically optimal decision. Our goal is to develop a formal methodology for reasoning about the free exploration provided by humans who interact with machine learning and AI systems.
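A highly simplified sketch of the free-exploration intuition (all probabilities and outcome rates below are invented): the system always recommends its myopic favorite, yet human overrides generate outcome data for the alternative without the system ever deliberately exploring.

```python
import random

random.seed(3)

TRUE_SUCCESS = {"A": 0.60, "B": 0.75}   # unknown to the system
OVERRIDE_RATE = 0.15                    # how often humans pick B anyway

outcomes = {"A": [], "B": []}
for _ in range(5000):
    # The system always recommends "A"; the human sometimes overrides to "B".
    chosen = "B" if random.random() < OVERRIDE_RATE else "A"
    outcomes[chosen].append(random.random() < TRUE_SUCCESS[chosen])

# Outcome estimates built purely from logged decisions, including overrides.
estimates = {t: sum(o) / len(o) for t, o in outcomes.items()}
```

In this toy simulation the overrides are random, so the estimates are unbiased; in reality human deviations are not random, and reasoning formally about when such observational "exploration" can be trusted is exactly the kind of question the project's methodology targets.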
Gender Bias in Conversations with Chatbots
Katie Roehrick, Jeff Hancock, Byron Reeves, Londa Schiebinger, James Zou, Garrick Fernandez and Debnil Sur
Currently, few human-computer interaction studies have used automated chatbot messaging technology to study gender biases in digital communication. While prior literature suggests that users respond stereotypically to gendered virtual humans, the majority of these studies use virtual characters capable of nonverbal behavior, such as gestures and facial expressions, similar to humans. Yet, chatbots lack most nonverbal communication capabilities; thus users will form impressions of chatbots based on limited cues (e.g., language; physical appearance, if an image is provided; or voice properties, if the bot is voice-synthesized). We thus propose a multi-study research program to examine biases in chatbot communication. For the initial study, we will manipulate both visual representations and language in order to examine how implied gender interacts with domain expertise cues and degree of chatbot responsiveness to affect users’ perceptions of and behavior toward chatbots. Given the current lack of research into this question, we believe that this study can provide much-needed insight into the specific effects of chatbot characteristics on self-disclosure, particularly regarding the impact of limited nonverbal behavior on gender biases during virtual interactions.
Harnessing AI to Answer Questions about Diversity and Creativity
Dan McFarland, Londa Schiebinger and James Zou
When outsiders (women or underrepresented minorities) enter an academic field, do they ask new questions? Does diversity enhance creativity, produce new knowledge, and lead to innovation? Universities and science-policy stakeholders, including the National Science Foundation, the National Institutes of Health, the European Commission, Stanford University, and the School of Engineering, readily subscribe to this argument. But is there, in fact, a diversity dividend in science and engineering? In this seed grant, we propose developing a proof of concept analyzing how the number of women entering a field over the past 70 years has or has not altered scholarship in that field, and identifying what those changes are. We propose developing this proof of concept for the field of History—where we hypothesize large effects on the diversity of scientific ideas. Once we have established a methodology, we will test our idea in a subfield of computer science, Natural Language Processing (where it is hypothesized that some areas, such as sentiment analysis, were pioneered by women), and then extend more generally to Computer Science. We hypothesize that links between team diversity and diversity in research questions are discipline specific and differ by field-specific paradigms, epistemic cultures, and notions of excellence. In all instances, our findings must be corrected for large social and cultural shifts. Our approach combines the strengths of both social science and computational text analysis.
The Impact of Artificial Intelligence on Perceptions of Humanhood
Benoît Monin and Erik Santoro
Can logic remain at the core of what it means to be human if AI clearly surpasses humans at it? What about language or complex thought? Will society redefine what is core to the human experience as humans lose ground to AI on cognitive abilities that traditionally enshrined humans at the top of the animal kingdom? Will humans instead focus on features that AI does not currently possess (e.g., personality, desires, morality or even spirituality) to retain a sense of superiority? The purpose of our research is to investigate how learning about and interacting with artificial intelligence change perceptions of what it means to be human -- and how that affects subsequent choices and behavior. Drawing on social psychological theory and using randomized control trial (RCT) experiments, we seek to understand and forecast how the increasing presence of AI in daily life will change perceptions of what it means to be human.
The Impact on Society of Autonomous Mobile Robots: A Pilot Study
Marco Pavone, Mark Duggan and David Grusky
The long-term goal of this project is to predict and influence the impact of autonomous mobile robots (in particular, self-driving cars) on society. For this initial developmental piece of the larger project, our research objectives are to (1) develop a plan for dynamic modeling of the impact of autonomous mobile robots on societal infrastructures, societal relations, and societal controls, (2) begin to build the datasets needed to develop this dynamic model, and (3) perform a small-scale study of the impact of self-driving cars on the built environment in the Bay Area, which will serve as both an inspiration and a test bed for our research. This is a pilot effort aimed at laying the foundations for a Stanford-wide initiative on assessing the impact of autonomous mobile robots on society.
Improving Refugee Integration Through Data-Driven Algorithmic Assignment
Jens Hainmueller, Kirk Bansak, Andrea Dillon, Jeremy Ferwerda, Dominik Hangartner, Duncan Lawrence and Jeremy Weinstein
The Stanford Immigration Policy Lab (IPL) has developed a data-driven algorithm for the optimal geographic assignment of refugees to resettlement locations within a host country. The optimization objective of the algorithm is to maximize refugees’ employment and improve other integration outcomes by leveraging synergies between refugees’ demographic profiles and the characteristics of specific resettlement locations. An initial study describing the algorithm and reporting backtests using historical data has been published in Science. The backtests suggest that our algorithmic assignment can lead to gains of roughly 40-70%, on average, in refugees’ employment outcomes relative to current assignment practices. As a major innovation in an often limited resource space, the algorithm has potential to significantly improve the refugee resettlement process at very little cost. In recognition of this, the Swiss government and several refugee resettlement agencies in the U.S. have committed to a partnership with IPL to undertake full-scale testing and implementation of the algorithm over the next several years. Switzerland, which formally approved a pilot test of the algorithm to commence in fall 2018, will be the first site of implementation. HAI’s generous support will allow the IPL team to lay the groundwork for these multi-country pilots, which will demonstrate the impact of a new AI tool for international policymakers responding to the current refugee crisis.
Learning Behavior Change Interventions At Scale
Michael Bernstein and James Landay
Behavior change systems help people manage their time better, act more sustainably, exercise more, and achieve many other goals. However, expert-designed behavior change systems suffer from high user abandonment rates: there is a mismatch between (1) the large variation in people's motivations, needs, and goals, and (2) the small number of monolithic, one-size-fits-all interventions that exist today. We propose a massive-scale approach to behavior change design: tapping into an online crowd of users to create a large set of highly varied interventions and to automatically identify the best one for each user. We are creating HabitLab, a web browser plug-in and mobile phone application focused on the popular behavior-change area of online time management, such as limiting time spent on social media. The system supports this process by deploying customizations as interventions and measuring their effectiveness. Our focus is a multi-armed bandit strategy that minimizes user attrition. Specifically, some interventions are more aggressive than others: while these work well for some users, they may drive away the vast majority of others. Our goal is therefore to develop a multi-armed bandit algorithm whose reward function estimates long-term effectiveness as a mix of time saved and attrition rate, so that draconian interventions do not drive away users.
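As a rough sketch of the attrition-aware bandit idea (an illustrative epsilon-greedy stand-in, not the project's algorithm; the arm parameters and attrition penalty are invented): an aggressive intervention saves more minutes per session but drives users away, so a reward that mixes time saved with an attrition penalty steers the bandit toward the gentler arm.

```python
import random

random.seed(2)

# Invented interventions: "aggressive" saves more time but causes more quitting.
ARMS = {
    "gentle":     {"minutes_saved": 4.0, "quit_prob": 0.02},
    "aggressive": {"minutes_saved": 9.0, "quit_prob": 0.60},
}
ATTRITION_PENALTY = 15.0   # cost assigned to losing a user
EPSILON = 0.1              # exploration rate

totals = {a: 0.0 for a in ARMS}
counts = {a: 0 for a in ARMS}

def pull(arm):
    """Simulate one session: minutes saved minus a penalty if the user quits."""
    p = ARMS[arm]
    saved = random.gauss(p["minutes_saved"], 1.0)
    quit_ = random.random() < p["quit_prob"]
    return saved - (ATTRITION_PENALTY if quit_ else 0.0)

for t in range(2000):
    if t < len(ARMS) or random.random() < EPSILON:
        arm = random.choice(list(ARMS))          # explore
    else:
        arm = max(ARMS, key=lambda a: totals[a] / max(counts[a], 1))  # exploit
    r = pull(arm)
    totals[arm] += r
    counts[arm] += 1

best = max(ARMS, key=lambda a: totals[a] / max(counts[a], 1))
```

With these invented numbers the gentle arm's expected combined reward (about 3.7) beats the aggressive arm's (about 0), so the bandit learns to avoid the draconian intervention even though it saves more minutes per session.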
Learning Decision Rules with Complex, Observational Data
Xinkun Nie and Stefan Wager
We study methods for personalized decision making using rich, observational data. This class of questions falls at the interface of epidemiology and econometrics on one hand, in that we seek to uncover causal relationships from observational data, and machine learning on the other, in that we need to work with complex representations. Our technical approach starts from building a data-adaptive objective function that isolates causal signals, and then optimizing this learned objective via methods developed in the AI community; the proposed research will examine the theoretical and practical potential of this approach. Initial results suggest that our approach of learning objectives presents a promising framework for bringing machine learning know-how to bear on problems in causal inference.
Learning Haptic Feedback for Motion Guidance
Julie Walker, Andrea Zanette, Mykel Kochenderfer and Allison Okamura
Collaboration between humans and robotic systems in physical tasks can be improved with clear communication of intentions between agents and by adapting device controllers to human movements. Haptics is a promising method for providing guidance to users during human-machine interaction, particularly through wearable or ungrounded devices. These systems allow a larger workspace and more freedom of motion for the user than grounded, kinesthetic devices, but they cannot apply net forces on the user. User responses to these haptic cues can vary depending on the user, pose, and direction of the guidance. Learning and “human-in-the-loop” optimization have been successful in other human-machine interaction applications. We plan to apply modeling and reinforcement learning to optimize ungrounded and wearable haptic guidance. Using two haptic devices – one hand-held torque feedback device and one fingertip skin-stretch device – we will record users’ movements in response to motion guidance. We will generate probabilistic models of the users’ motions and apply model-based learning methods to optimize the haptic guidance provided to each user. We hope these methods will improve the ability of humans and intelligent systems to communicate effectively during tasks such as robotic surgery, teleoperation, and collaborative object manipulation.
Mining the Downstream Effects of Artificial Intelligence on How Clinicians Make Decisions
Ron Li, Jason Ku Wang, Lance Downing, Lisa Shieh, Christopher Sharp and Jonathan Chen
With up to 98,000 deaths in hospitals attributed to preventable medical errors, clinical decision support (CDS) powered by artificial intelligence (AI) is deemed an integral component of the National Academy of Medicine’s vision of a Learning Health Care System that delivers high-quality care in an increasingly complex healthcare system. AI research has focused on developing better algorithms to build CDS with improved predictive accuracy, but less attention has been paid to understanding the downstream effects of AI on clinical decision making, which could lead to unanticipated consequences for patient outcomes. Detection of unanticipated consequences of CDS by healthcare systems continues to rely on sporadic anecdotes and incident reports, which are limited in scope and carry significant bias. This gap in our ability to systematically evaluate how CDS affects how clinicians think and behave hinders the design and implementation of safe and effective AI for patient care.
We propose to develop a proof of concept for a novel approach that uses pattern mining to systematically evaluate the downstream effects of CDS on clinical decision making. Our approach will apply itemset and frequent-sequence mining to digital trace data in the EHR generated in situ by clinicians downstream of exposure to CDS alerts. We aim to create a proof of concept of a method for generating a comprehensive, low-bias feature pool of downstream clinical decisions made in the real-world setting, one that can potentially be generalized into a new way to study the effects of AI on how clinicians make decisions.
New Moral Economy in an Age of Artificial Intelligence
Margaret Levi, Justice Mariano-Florentino Cuellar, Roberta Katz, John Markoff and Jane Shaw
As we approach the age of AI, we are still operating within a moral, political, and economic framework developed in the mid twentieth century. This grant enables CASBS to create a network of influential and pioneering academics, technologists, industry leaders, government officials, journalists, and civil society activists who can create a new moral political economy that informs corporate practice, government policy, and social interactions. We will incorporate into the framework what we learn from other aspects of the project: working with industry to create and deploy an ethical design team; and studying the ways the introduction of AI resembles the practices of earlier religious communities. One product will be targeted briefings and recommendations to policymakers, industry leaders, unions, and civil society in the U.S. and abroad on the creation of a moral economy for the AI age.
Novel Approach to Map Seasonal Changes in Infection Risk for Schistosomiasis: a Multi-Scale Integration of Satellite Data and Drone Imagery by Using Artificial Intelligence
Giulio De Leo, Susanne Sokolow, Eric Lambin, Zac YC Liu, Chris Re, I. Jones, R. Grewelle, A. Ratner and A. Lund
Accurately predicting the spatial distribution of freshwater intermediate snail hosts is crucial for the control of schistosomiasis, a debilitating parasitic disease of poverty affecting more than 250 million people worldwide, mostly in sub-Saharan Africa. Yet standard techniques for monitoring intermediate hosts are labor-intensive, time-consuming, and provide information limited to the small areas that are manually sampled. Consequently, in the low-income countries where schistosomiasis control is needed most, large-scale programs to fight this disease generally operate with little understanding of where transmission hotspots are and what type of intervention will be most effective to reduce transmission. We will develop a novel, predictive system of schistosomiasis risk that uses artificial intelligence to integrate field data on vegetation and snail distribution with high-definition satellite and drone imagery. Specifically, we will train deep learning algorithms to identify snail habitat from drone and satellite imagery and to generate seasonal maps of disease risk and hotspots of transmission at a scale that is relevant for the national control program in Senegal, one of the most hyper-endemic regions in the world. Risk maps will then be used to develop cost-effective strategies to curb disease transmission using Mass Drug Administration, snail control, and other available tools.
Planning for Multi-Modal Human-Robot Communication
Yuhang Che, Cara Nunez, Allison Okamura and Dorsa Sadigh
Robots are expected to become common in homes, workplaces, and public spaces for purposes of assistance, delivery of materials, and security. Recent developments in artificial intelligence have given robots the capability to navigate in complex environments and perform various tasks. However, many challenges still exist for robots that encounter and interact with humans. In a collaboration between Mechanical Engineering and Computer Science, we propose to develop intelligent algorithms that enable robots to proactively communicate with humans in their surroundings to improve safety, efficiency, and user experience. In this project we will: (1) model human-robot interaction, in particular how communication between a robot and a human changes the reward function that a human optimizes in order to accomplish a task, (2) plan communicative robot actions, including movements of the robot that indicate the robot's intent and explicit haptic (touch) signals sent to a wearable device that can be felt and interpreted by the human, and (3) explore different communication modalities and develop algorithms to automatically generate the most appropriate communicative action. We will conduct experiments with human participants and mobile robots navigating in indoor environments in order to collect data for modeling and test the effectiveness of our algorithms and haptic devices.
Scaling Collection of Labeled Data for Creating AI Systems through Observational Learning
Daniel Rubin, Chris Re, Jared Dunnmon, Alex Ratner and Darvin Yi
Modern machine learning models have achieved impressive empirical success across medical imaging modalities, but the application of these techniques in practice is hindered by the massive cost of creating the large labeled training sets required. In this project, we hope to address this scalability challenge by building a system that (a) collects passive or observational data from natural expert actions during the experts’ normal daily routine and (b) leverages that observational data to derive expert-sourced heuristics that can subsequently be combined with large amounts of unlabeled data and generative modeling to rapidly create large, “weakly supervised” datasets, making scalable application of machine learning models in medical imaging feasible. We will perform the following tasks: (1) we will track eye gaze as experts view lesions on images, adapting a commercial eye tracker to the web-based ePAD image viewer developed in Dr. Rubin’s lab so that we can record the (x,y,t) spatiotemporal coordinates of the expert’s gaze on the screen at any time; (2) we will develop supervised learning methods that integrate gaze tracking, automating the assignment of coarse pixelwise detection/segmentation labels to image regions seeded by the gaze coordinates, which will produce tagged regions of interest on the images; and (3) we will develop weakly supervised machine learning methods to refine the coarse segmentations generated by eye gaze, producing the image “labeling functions” required to create weakly supervised datasets and leveraging the Snorkel software package for weakly supervised learning developed in Dr. Re’s lab.
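The labeling-function idea in task (3) can be sketched as follows. The heuristics, field names, and thresholds here are invented for illustration, and the actual project uses the Snorkel package's generative label model rather than this naive majority vote:

```python
from collections import Counter

ABSTAIN, NEGATIVE, POSITIVE = None, 0, 1

# Several noisy, hand-written "labeling functions" vote on each unlabeled
# image region; none is reliable alone, but their combined votes produce
# training labels without manual annotation.
def lf_gaze_dwell(region):
    # Regions the expert's gaze dwelt on are more likely lesions.
    return POSITIVE if region["gaze_ms"] > 400 else ABSTAIN

def lf_tiny_region(region):
    # Very small regions are likely noise, not lesions.
    return NEGATIVE if region["area_px"] < 20 else ABSTAIN

def lf_high_contrast(region):
    return POSITIVE if region["contrast"] > 0.7 else ABSTAIN

LFS = [lf_gaze_dwell, lf_tiny_region, lf_high_contrast]

def weak_label(region):
    votes = [v for lf in LFS if (v := lf(region)) is not ABSTAIN]
    if not votes:
        return ABSTAIN                        # no evidence either way
    return Counter(votes).most_common(1)[0][0]

regions = [
    {"gaze_ms": 900, "area_px": 150, "contrast": 0.9},   # two positive votes
    {"gaze_ms": 50,  "area_px": 10,  "contrast": 0.2},   # one negative vote
    {"gaze_ms": 100, "area_px": 60,  "contrast": 0.4},   # all abstain
]
labels = [weak_label(r) for r in regions]
```

Snorkel replaces the majority vote with a generative model that learns each labeling function's accuracy and correlations, but the interface — heuristics that vote or abstain — is the same.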
Smart Learning Healthcare System for Human Behavior Change
Michelle Guo, Fei-Fei Li and Arnold Milstein
Each year, healthcare-associated infections (HAIs) kill approximately 90,000 patients in the US, resulting in more annual deaths than car accidents and incurring nearly 10 billion dollars in costs. Hand hygiene (HH) is the first line of defense in preventing HAIs, but it is difficult to enforce--tracking multiple people 24/7 is infeasible and inefficient. AI, specifically computer vision technology (CVT), has recently shown promise for monitoring HH in hospitals by detecting HH activity with over 99% accuracy. However, like many AI approaches, the system stops at recognition. Although the AI community has created a super-human, fully automated system to monitor HH quality, the challenge remains to develop an AI system that can change clinician behavior via active, real-time feedback without being prone to hospital alert fatigue. We must also take special care in how we integrate AI into clinicians’ workflows, given the safety-critical and fast-paced healthcare environment. To solve this problem, we propose a smart learning healthcare system (LHS) that “learns” by continually encouraging good clinician behavior while also capturing and analyzing the results of the system’s feedback for improvement. In this system, the AI algorithm processes hospital footage to detect non-compliant HH behavior and signals an intervention, reminding the clinician to practice good hygiene. The end goal is to encourage and enforce better HH practices in hospitals, ultimately increasing HH compliance rates (workflow outcome) and reducing HAI rates (patient outcome).
Using AI to Facilitate Citizen Participation in Democratic Policy Deliberations
Deger Turan, Frank Fukuyama, Jerry Kaplan, Larry Diamond, Eileen Donahoe and Chris Potts
Today’s communication ecosystem enables citizen participation in public discourse at an unprecedented scale. Elected officials, their staff, and civil servants spend significant time and resources navigating public feedback received through multiple channels, including surveys and social media. While a lower threshold of entry has increased the volume of discourse, this has not led to a more effective understanding of public opinion or better-informed decision making for governance.
AI techniques developed in academia and industry are typically applied to unstructured text datasets after the fact, by trained analysts, and are rarely integrated into the decision-making process. In this project, we will build an easy-to-use interface where constituents’ comments can be aggregated and explored visually as they are collected, without expert supervision. More technically, we are building a combination of topic modeling, sentiment/disposition analysis, and metadata clustering in order to map comments related by subject, language, and context. We are also interested in developing more accessible presentations of opinion spectra by finding the most significant axes of variance, capturing the breadth of the content and how it shifts over time.
Such representations of opinion landscapes will enable governments, advocacy groups, and citizens to better understand crowd feedback, filter out noise and bot automation, empower decision makers, and enhance democratic participation.
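A minimal sketch of the comment-mapping idea, assuming a toy bag-of-words TF-IDF vectorizer and a greedy cosine-similarity grouping in place of the full topic-modeling and sentiment pipeline described above (all function names and thresholds here are illustrative, not the project's actual system):

```python
import numpy as np
from collections import Counter

def tfidf_matrix(comments):
    """Toy TF-IDF vectorizer; a real system would use a proper topic
    model (e.g., LDA) plus sentiment and metadata features."""
    docs = [c.lower().split() for c in comments]
    vocab = sorted({w for d in docs for w in d})
    idx = {w: i for i, w in enumerate(vocab)}
    tf = np.zeros((len(docs), len(vocab)))
    for r, d in enumerate(docs):
        for w, n in Counter(d).items():
            tf[r, idx[w]] = n / len(d)
    df = (tf > 0).sum(axis=0)              # document frequency per term
    idf = np.log(len(docs) / df)           # rarer terms weigh more
    return tf * idf, vocab

def cluster_by_similarity(X, threshold=0.2):
    """Greedy grouping: attach each comment to the cluster of the most
    similar earlier comment if cosine similarity exceeds `threshold`,
    otherwise start a new cluster."""
    norms = np.linalg.norm(X, axis=1)
    labels, next_label = [-1] * len(X), 0
    for i in range(len(X)):
        best_j, best_s = -1, threshold
        for j in range(i):
            s = X[i] @ X[j] / (norms[i] * norms[j] + 1e-12)
            if s > best_s:
                best_j, best_s = j, s
        if best_j >= 0:
            labels[i] = labels[best_j]
        else:
            labels[i] = next_label
            next_label += 1
    return labels
```

On a handful of comments about, say, road repair versus school funding, the two subjects fall into separate clusters, giving the kind of subject-level map of constituent feedback the project envisions at much larger scale.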
Using Computer Vision to Measure Neighborhood Variables Affecting Health
Jackelyn Huang and Nikhil Naik
Neighborhood environments play a significant role in shaping the health of individuals and communities, consequently contributing to inequality in the U.S. Past research suggests that the presence of physical disorder, poorly maintained properties, and vacant lots in neighborhoods can negatively affect physical and mental health, attract more crime and disorder, and lead to neighborhood disinvestment. Few studies, however, examine this process, because collecting data on the physical conditions of neighborhoods, especially across neighborhoods and cities and over time, requires extensive time and labor. Using advances in computer science, this project will develop an automated method to systematically observe and record the physical conditions of neighborhood environments at a large scale. The resulting measures will provide a powerful new resource for understanding inequality in the U.S. for the scientific research community and will also help policymakers, practitioners, and the public track neighborhood progress and target improvements.
Using Deep Learning for Imaging Alzheimer’s Disease with Simultaneous Ultra-low-dose PET/MRI
Greg Zaharchuck, Bill Dally, John Pauly and Elizabeth Mormino
Alzheimer’s Disease (AD) is a devastating neurodegenerative disorder and a major public health crisis, currently affecting over 5 million Americans. Positron emission tomography (PET) imaging can identify the hallmark pathologies of AD, including amyloid plaque buildup in the brain, which can precede the onset of frank dementia by 10-20 years. Amyloid PET is essential for patient diagnosis as well as for identifying non-demented patients at high risk of conversion to dementia for enrollment in clinical trials of promising AD pharmaceuticals. However, the radiation dose given to the whole body from the radiotracer, its high cost, and the limited number of subjects that can be scanned per day limit widespread use of PET. This means that it is not considered a potential screening modality for younger, non-demented patients, including those who have genetic factors (such as the APOE-E4 variant) indicating high risk. With the advent of AI-based methodologies such as convolutional neural networks and simultaneously acquired multimodal magnetic resonance imaging (MRI) and PET scanning, we hypothesize that we can generate high-SNR, diagnostic PET images from PET/MRI scan protocols with ultra-low injected radiotracer dose and/or reduced imaging time. This deep learning project will benefit many populations, including AD patients who require early or frequent PET follow-ups. The knowledge obtained will form the basis for large, multicenter trials acquiring simultaneous PET/MR imaging at radiation doses 100 times lower than currently used, similar to the dose accrued during cross-country air travel, ultimately speeding the timeline to a cure for AD.