Prior Hoffman-Yee Grant Recipients
2021 Hoffman-Yee Grant Recipients
Intelligent Wearable Robotic Devices for Augmenting Human Locomotion
Falling injuries among the elderly cost the U.S. health system $50 billion in 2015, while causing immeasurable suffering and loss of independence. This research team seeks to develop wearable robotic devices for the elderly, people with sports injuries or neurodegenerative diseases, and others, using an AI system that both aids human locomotion and predicts and prevents falls.
- Principal Investigator: Karen Liu, Associate Professor of Computer Science
- Faculty, postdoctoral scholars, and graduate students from Mechanical Engineering, Bioengineering, Orthopedic Surgery, and Medicine
AI Tutors To Help Prepare Students for the 21st Century Workforce
The project aims to demonstrate a path to effective, inspiring education that is accessible and scalable. The team will create new AI systems that model and support learners as they work through open-ended activities like writing, drawing, conducting a science lab, or coding. The research will monitor learners’ motivation, identity, and competency to improve student learning. Tested solutions will be deployed on Code.org, in brick-and-mortar schools, in virtual science labs, and beyond.
- Principal Investigator: Christopher Piech, Assistant Professor of Computer Science Education
- Faculty and postdoctoral scholars from Education, Psychology, and Computer Science
Curious, Self-Aware AI Agents To Build Cognitive Models and Understand Developmental Disorders
Human children learn about their world and other people as they explore. This project will bring together tools from AI and cognitive sciences, creating playful, socially interactive artificial agents. In the process, the team hopes to gain insights into building robots that can handle new environments and interact naturally in social settings.
- Principal Investigator: Daniel Yamins, Assistant Professor of Psychology and Computer Science
- Faculty, postdoctoral scholars, and graduate students affiliated with Psychology, the Graduate School of Education, Computer Science, and the School of Medicine
An AI “Time Machine” for Investigating the History of Concepts
- Principal Investigator: Dan Jurafsky, Professor of Humanities, Linguistics, and Computer Science
- Faculty from English and Digital Humanities, Philosophy, Economics, French, Political Science, History of Science, Sociology, Psychology, and Biomedical Data Science
2020 Hoffman-Yee Grant Recipients
Intelligent Wearable Robotic Devices for Augmenting Human Locomotion
Having the ability to navigate around furniture, negotiate stairs and corridors, get on and off chairs or beds, and get in and out of vehicles is essential for an individual to carry out day-to-day tasks and maintain independence. The goal of this project is to create intelligent wearable robotic devices that enable individuals with physical impairments to complete daily tasks by augmenting their capability to move in constrained spaces such as home environments. Our approach is to create an intelligent human agent that in turn trains a wearable device to be intelligent. Simply put, we use AI to teach AI. We expect the intelligent wearable devices to understand human behaviors and intentions in the context of their environments and to make decisions that complement human movement patterns and capabilities. The success of this project will extend the impact of AI to augmenting human physical skills and mobility, improving independence and quality of life for older adults and people with physical impairments.
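The sketch below is a minimal, hypothetical illustration of the "AI teaches AI" idea: a simulated human model provides the training signal for tuning a device's assistance profile. The gait model, cost terms, and optimizer are illustrative assumptions, not the project's actual system.

```python
# Toy sketch (not the team's actual system): a simulated "human" gait model
# is used to evaluate and tune an exoskeleton assistance controller, i.e.
# one simulated agent is used to train another.
import numpy as np

rng = np.random.default_rng(0)

def simulated_human_effort(assist_torque: np.ndarray) -> float:
    """Crude stand-in for a learned human model: effort left to the human
    after the device contributes `assist_torque` over one gait cycle."""
    phase = np.linspace(0.0, 2.0 * np.pi, assist_torque.size)
    required = np.maximum(np.sin(phase), 0.0)           # torque the gait demands
    residual = np.clip(required - assist_torque, 0.0, None)
    penalty = 0.1 * np.sum(assist_torque ** 2)           # discourage over-assistance
    return float(np.sum(residual ** 2) + penalty)

def optimize_assistance(n_bins: int = 20, iters: int = 500) -> np.ndarray:
    """Hill-climb the assistance profile against the simulated human."""
    profile = np.zeros(n_bins)
    best = simulated_human_effort(profile)
    for _ in range(iters):
        candidate = np.clip(profile + 0.05 * rng.standard_normal(n_bins), 0.0, 1.0)
        cost = simulated_human_effort(candidate)
        if cost < best:
            profile, best = candidate, cost
    return profile

if __name__ == "__main__":
    learned = optimize_assistance()
    print("final simulated human effort:", round(simulated_human_effort(learned), 3))
```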
NAME | ROLE | SCHOOL | DEPARTMENTS |
---|---|---|---|
Karen Liu | PI | Engineering | Computer Science |
Steve Collins | Co-PI | Engineering | Mechanical Engineering |
Scott Delp | Co-PI | Engineering | Bioengineering, Mechanical Engineering, Orthopaedic Surgery |
Leo Guibas | Co-PI | Engineering | Computer Science, Electrical Engineering (courtesy) |
VJ Periyakoil | Co-PI | School of Medicine | Primary Care |
AI Tutors to Help Prepare Students for the 21st Century Workforce
We aim to build an intelligent learning platform that improves how it teaches as it learns more about each student and about students overall. Currently, most personalized tutors simply move the student forward or backward among lessons. No existing system leverages the enormous range of techniques and strategies that inspire and enable effective learning: our course might immerse students in discovery learning, orchestrate tutoring, employ teachable agents, or offer discursive opportunities, depending on what it learns. Using all interactions with each student (answers, partial work, explorations, side comments), our course will adjust the learning experience so that students receive customized content and types of instruction. To get there, our team will engage with central questions of AI algorithms and theory. We will build and test new educational tools, targeting online programming and data science learning spaces, to: automate and widen assessments of what students know and can do, facilitate effective and inclusive online peer interactions, enhance instructor efficacy, construct narratives that engage and include students, and automatically identify which activity best supports desired learning outcomes. These threads will blend together to create a joyful new learning experience when students take “the smartest course in the world.”
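One hypothetical way to "automatically identify which activity best supports desired learning outcomes" is to treat the choice of instructional strategy as a bandit problem. The sketch below is an illustrative epsilon-greedy selector, not the project's algorithm; the strategy names and reward signal are assumptions.

```python
# Illustrative sketch only: an epsilon-greedy bandit that picks which type of
# instruction to offer a student next, based on observed learning-gain feedback.
import random
from collections import defaultdict

STRATEGIES = ["discovery_learning", "tutoring", "teachable_agent", "discussion"]

class InstructionSelector:
    def __init__(self, epsilon: float = 0.1) -> None:
        self.epsilon = epsilon
        self.totals = defaultdict(float)   # cumulative learning gain per strategy
        self.counts = defaultdict(int)     # how often each strategy was tried

    def choose(self) -> str:
        """Mostly pick the best-performing strategy, sometimes explore."""
        if random.random() < self.epsilon or not self.counts:
            return random.choice(STRATEGIES)
        return max(STRATEGIES, key=lambda s: self.totals[s] / max(self.counts[s], 1))

    def update(self, strategy: str, learning_gain: float) -> None:
        """Record the observed learning gain for the strategy that was used."""
        self.totals[strategy] += learning_gain
        self.counts[strategy] += 1
```

In use, the platform would call choose() to pick the next activity, then feed an estimated learning gain back through update(); a production system would condition on each student's context rather than pooling all students, but the selection-and-feedback loop is the same.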
NAME | ROLE | SCHOOL | DEPARTMENTS |
---|---|---|---|
Chris Piech | PI | Engineering | Computer Science |
Emma Brunskill | Co-PI | Engineering | Computer Science |
Noah Goodman | Co-PI | Humanities and Sciences, Engineering | Psychology, Computer Science, Linguistics (courtesy) |
James Landay | Co-PI | Engineering | Computer Science |
Jennifer Langer-Osuna | Co-PI | Graduate School of Education |
Dan Schwartz | Co-PI | Graduate School of Education |
Toward Grounded, Adaptive Communication Agents
Social interactions of all kinds require continual adaptation. Consider how the language and behavior of a medical patient and their caregiver will evolve over the course of their relationship. Initially, the patient will need to rely on detailed instructions like "Please get me the eprosartan mesylate from the downstairs bathroom. I will get some water myself." As the two adapt to each other, such descriptions will simplify to "Time for my meds", with a shared sense that implies coordinated action specialized to their relationship. This joint process of grounding (finding the right bottle) and adaptation (learning what "meds" means) is fast, largely unconscious, and essential for success.
To date, work on artificial agents has largely set these processes aside. However, as we ask these agents to interact with us in more open-ended ways, their lack of physical and social grounding increasingly leads to poor task performance; an assistive robot acting as an in-home caregiver would be a liability if it failed to adapt to its patient and context. We are addressing these problems through interwoven efforts: characterizing the cognitive and social dynamics of these phenomena and training robots and interactive virtual agents. Our core objectives are to facilitate the development of next-generation intelligent agents and to understand the broader societal effects such technologies will have.
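One standard formalization of grounded communication in the computational pragmatics literature is the Rational Speech Acts (RSA) framework, in which a pragmatic speaker chooses utterances by reasoning about a literal listener. The sketch below is a minimal RSA illustration with a hypothetical two-utterance lexicon; it is not a description of the team's models.

```python
# Minimal Rational Speech Acts (RSA) sketch: one standard way to formalize
# how speakers trade informativeness against effort when referring to things.
# The utterances, referents, and lexicon here are hypothetical.
import numpy as np

UTTERANCES = ["get the eprosartan from the downstairs bathroom", "my meds"]
REFERENTS = ["eprosartan_bottle", "aspirin_bottle"]

# Literal semantics: which utterances are literally true of which referents.
LEXICON = np.array([
    [1.0, 0.0],   # the detailed request only fits the eprosartan bottle
    [1.0, 1.0],   # "my meds" is literally compatible with both bottles
])

def literal_listener(lexicon: np.ndarray) -> np.ndarray:
    """P(referent | utterance) under purely literal interpretation."""
    return lexicon / lexicon.sum(axis=1, keepdims=True)

def pragmatic_speaker(lexicon: np.ndarray, alpha: float = 4.0) -> np.ndarray:
    """P(utterance | referent): speakers prefer informative, low-cost utterances.
    Rows index referents, columns index utterances."""
    cost = np.array([len(u.split()) for u in UTTERANCES]) * 0.2
    utility = alpha * (np.log(literal_listener(lexicon).T + 1e-9) - cost)
    exp_u = np.exp(utility)
    return exp_u / exp_u.sum(axis=1, keepdims=True)

if __name__ == "__main__":
    print(pragmatic_speaker(LEXICON).round(3))
```

Adaptation, in this picture, amounts to the pair updating the lexicon and cost terms together over time, which is why short forms like "my meds" become reliable within a specific relationship.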
NAME | ROLE | SCHOOL | DEPARTMENTS |
---|---|---|---|
Christopher Potts | PI | Humanities and Sciences | Linguistics, Computer Science (Courtesy) |
Judith Degen | Co-PI | Humanities and Sciences | Linguistics |
Mike Frank | Co-PI | Humanities and Sciences | Psychology, Linguistics (courtesy) |
Noah Goodman | Co-PI | Humanities and Sciences, Engineering | Psychology, Computer Science, Linguistics (courtesy) |
Thomas Icard | Co-PI | Humanities and Sciences | Philosophy, Computer Science (courtesy)
Dorsa Sadigh | Co-PI | Engineering | Computer Science, Electrical Engineering |
Mariano-Florentino Cuéllar | Participating Faculty | Law |
Curious, Self-aware AI Agents to Build Cognitive Models and Understand Developmental Disorders
Truly intelligent autonomous agents must be able to discover useful behaviors in complex environments without having humans available to continually pre-specify tasks and rewards. This ability is beyond that of today's most advanced autonomous robots. In contrast, human infants naturally exhibit a wide range of interesting, apparently spontaneous, visuomotor behaviors, including navigating their environment, seeking out and attending to novel objects, and engaging physically with these objects in novel and surprising ways. In short, young children are excellent at playing: "scientists in the crib" who create, intentionally, events that are new, informative, and exciting to them. Aside from being fun, play behaviors are an active learning process, driving the self-supervised learning of representations underlying sensory judgments, motor planning capacities, and social interaction.
But how exactly do such young children know how to play, and how can we formalize and harness the human play capacity to substantially improve the flexibility and interactivity of autonomous artificial agents? Here, we propose using deep neural network software agents, endowed with a mathematically formalized sense of "curiosity" and sophisticated perceptual capacities, to naturally generate playful behavior in novel environments. Combining cognitive science ideas with deep reinforcement learning, we seek to make a substantial leap in the fundamentals of AI. From a cognitive science and clinical perspective, we will use these improved AI systems to build better quantitative models of development and improve our understanding of developmental disorders.
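A common way to formalize "curiosity" of this kind is to reward an agent in proportion to the prediction error of a learned world model, so that it seeks out experiences it cannot yet predict. The sketch below illustrates that idea with a toy linear forward model; the dimensions and update rule are assumptions, not the project's architecture.

```python
# Schematic sketch: intrinsic "curiosity" reward defined as the prediction
# error of a learned forward model of the world, so the agent is drawn to
# states and interactions it cannot yet predict.
import numpy as np

rng = np.random.default_rng(0)

class TinyWorldModel:
    """Linear forward model: predicts the next observation from (obs, action)."""

    def __init__(self, obs_dim: int, act_dim: int, lr: float = 0.01) -> None:
        self.W = rng.normal(scale=0.1, size=(obs_dim, obs_dim + act_dim))
        self.lr = lr

    def predict(self, obs: np.ndarray, act: np.ndarray) -> np.ndarray:
        return self.W @ np.concatenate([obs, act])

    def curiosity_reward(self, obs, act, next_obs) -> float:
        """Intrinsic reward = squared prediction error of the forward model."""
        error = next_obs - self.predict(obs, act)
        return float(np.sum(error ** 2))

    def update(self, obs, act, next_obs) -> None:
        """One gradient step reducing the squared prediction error."""
        x = np.concatenate([obs, act])
        error = next_obs - self.W @ x
        self.W += self.lr * np.outer(error, x)

if __name__ == "__main__":
    model = TinyWorldModel(obs_dim=4, act_dim=2)
    obs, act, nxt = rng.normal(size=4), rng.normal(size=2), rng.normal(size=4)
    print("intrinsic reward:", round(model.curiosity_reward(obs, act, nxt), 3))
    model.update(obs, act, nxt)
```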
NAME | ROLE | SCHOOL | DEPARTMENTS |
---|---|---|---|
Dan Yamins | PI | Humanities and Sciences, Engineering | Psychology, Computer Science |
Mike Frank | Co-PI | Humanities and Sciences | Psychology, Linguistics (courtesy) |
Nick Haber | Co-PI | Graduate School of Education | Computer Science (courtesy)
Fei-Fei Li | Co-PI | Engineering | Computer Science
Dennis P. Wall | Co-PI | School of Medicine | Pediatrics, Psychiatry (courtesy), Biomedical Data Science
Reinventing Government with AI: Modern Tax Administration
Collecting revenue is a core function of government. Taxes fund nearly all public programs, from health care to environmental protection to military defense. The Internal Revenue Service (IRS) relies critically on taxpayer audits to detect under-payment and to encourage honest income reporting, but the process faces considerable challenges.
The annual tax gap – the difference between taxes owed and paid – is nearing $500 billion. This shortfall starves the government of needed resources, while contributing to growing wealth inequality. The IRS has faced shrinking enforcement resources and a dwindling capacity to audit taxpayers for evasion or fraud, with a 42% drop in the audit rate from 2010 to 2017. Some analysts have suggested that audits focus excessively on lower-income taxpayers; a more complete analysis of these distributive concerns, and an approach that addresses them, is important.
In partnership with the IRS, our team is using AI and new active learning methods to help modernize our country’s system of tax collection and risk prediction. Our work seeks to develop a fair, effective, and explainable AI system for identifying tax evasion and addressing the human-centered challenges of integrating AI in a complex bureaucracy with ~10,000 diverse revenue agents, 150M taxpayers, and 1M audits annually.
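As a hedged illustration of how active learning can guide audit selection, the sketch below uses simple uncertainty sampling: audit the returns the risk model is least sure about, so each audit label is maximally informative. The data, model, and selection rule are hypothetical, and a real system would also need the fairness and explainability considerations discussed above.

```python
# Hypothetical illustration (not the IRS system): uncertainty sampling, a
# basic active-learning strategy, used to decide which returns to audit next
# so that each new label is maximally informative for the risk model.
import numpy as np

rng = np.random.default_rng(0)

def predicted_risk(features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Toy logistic risk score per return."""
    return 1.0 / (1.0 + np.exp(-features @ weights))

def select_for_audit(features: np.ndarray, weights: np.ndarray, budget: int) -> np.ndarray:
    """Pick the `budget` returns whose predicted risk is closest to 0.5,
    i.e. where the model is most uncertain and an audit teaches it the most."""
    uncertainty = -np.abs(predicted_risk(features, weights) - 0.5)
    return np.argsort(uncertainty)[-budget:]

if __name__ == "__main__":
    returns = rng.normal(size=(1000, 5))   # hypothetical return features
    weights = rng.normal(size=5)           # hypothetical fitted risk model
    print("indices selected for audit:", select_for_audit(returns, weights, budget=10))
```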
NAME | ROLE | SCHOOL | DEPARTMENTS |
---|---|---|---|
Jacob Goldin | PI | Law | Law
Daniel Ho | Co-PI | Law | Political Science |
Guido Imbens | Co-PI | Business | Economics |
Anne Joseph O'Connell | Co-PI | Law | Law |
Jure Leskovec | Co-PI | Engineering | Computer Science |
Rebecca Lester | Co-PI | Business | Business |
An AI “Time Machine” to Explore the Social Lives of Concepts
Humans understand the world via concepts, models that express our conception of ourselves and our society. We develop new AI technology to help humanists and social scientists trace how concepts develop and change over time and how concepts differ between groups. Our goal is to build a new kind of microscope for studying the dynamics of concept change and culture, using natural language processing and drawing on online texts from different languages, genres, and time periods to answer deep humanistic and social science questions. Our work also enriches our understanding of how AI systems like neural network language models represent concepts themselves.
Our multidisciplinary team from the humanities, social sciences, computational sciences, and the library asks questions like: How are complex concepts represented (including textual and visual elements) in modern neural networks? How do concepts become moralized over time? How do our conceptions of immigration and immigrants change with successive waves of immigrants? How do conceptions of gender and race vary across time and geography, and what are the implications for sociology and the history of science? What are the legal implications (for example, for legal originalism) of conceptual and word meaning varying between groups or over time?
This project applies AI to research into the social and historical dimensions of human thought, opening up new perspectives on both human and machine intelligence.
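One common NLP recipe for this kind of "time machine" is to train word embeddings on period-specific corpora and compare a concept's nearest associations across periods. The sketch below illustrates only the comparison step, with hypothetical toy vectors; it is not the team's pipeline.

```python
# Sketch under stated assumptions: per-period word embeddings (toy random
# vectors here) are compared to trace how a concept's nearest associations
# shift across decades, a standard recipe for studying concept change.
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def nearest_associates(embeddings: dict, concept: str, k: int = 3) -> list:
    """Rank other words by similarity to `concept` within one time period."""
    target = embeddings[concept]
    scores = {w: cosine(target, vec) for w, vec in embeddings.items() if w != concept}
    return sorted(scores, key=scores.get, reverse=True)[:k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vocab = ["immigration", "labor", "opportunity", "security", "border"]
    # Hypothetical embeddings for two periods; real ones would be trained on
    # period-specific corpora and aligned to a shared space before comparison.
    periods = {year: {w: rng.normal(size=50) for w in vocab} for year in (1900, 2000)}
    for year, emb in periods.items():
        print(year, nearest_associates(emb, "immigration"))
```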
NAME | ROLE | SCHOOL | DEPARTMENTS |
---|---|---|---|
Ran Abramitzky | Co-PI | Humanities and Sciences | Economics |
Mark Algee-Hewitt | Co-PI | Humanities and Sciences | English |
R. Lanier Anderson | Co-PI | Humanities and Sciences | Philosophy |
Dan Edelstein | Co-PI | Humanities and Sciences | French & Italian, History (courtesy) |
Julian Nyarko | Co-PI | Law | Law
Dan Jurafsky | PI | Humanities and Sciences, Engineering | Linguistics, Computer Science
Alison McQueen | Co-PI | Humanities and Sciences | Political Science |
Londa Schiebinger | Co-PI | Humanities and Sciences | History |
Rob Willer | Co-PI | Humanities and Sciences, Business (by courtesy) | Sociology, Psychology (by courtesy) |
Jamil Zaki | Co-PI | Humanities and Sciences | Psychology |
James Zou | Co-PI | School of Medicine | Biomedical Data Science, Computer Science (by courtesy), Electrical Engineering (by courtesy) |
Catherine Coleman | Senior Personnel | Stanford Libraries |