What Can AI Learn from Human Intelligence?
Can we teach robots to generalize their learning? How can algorithms become more commonsensical? Can a child’s learning style influence AI?
The Stanford Institute for Human-Centered Artificial Intelligence’s fall conference took up those and other questions in an effort to mutually improve and better understand artificial and human intelligence. The event’s theme of “triangulating intelligence” among the fields of AI, neuroscience, and psychology aimed to develop research and applications for large-scale impact.
HAI faculty associate directors Christopher Manning, a Stanford professor of machine learning, linguistics, and computer science, and Surya Ganguli, a Stanford associate professor of neurobiology, served as hosts and panel moderators for the conference, which was co-sponsored by Stanford’s Wu Tsai Neurosciences Institute, Department of Psychology, and Symbolic Systems Program.
Speakers described cutting-edge approaches, some established and some new, for creating a two-way flow of insights between research on human and machine intelligence, with powerful applications in view. Here are some of their key takeaways.
The Power of Deep Reinforcement Learning
Matthew Botvinick, director of neuroscience research for DeepMind, provided a broad overview of the AI company’s research-driven advancement of AI applications using deep reinforcement learning (training using rewards) and other neuroscience/psychology concepts.
In 2015, for example, DeepMind trained machines to play classic Atari games at superhuman levels, then extended this approach to more complicated games like StarCraft and Go and, most recently, to multi-agent games like Capture the Flag.
That work has led to groundbreaking ideas and practices drawing on concepts from fields as varied as developmental psychology and animal behavior. For example, DeepMind is currently training AI neural networks using cutting-edge understanding of dopamine-based reinforcement learning in humans. “We’re helping AI systems make better predictions based on what we’ve learned about the brain,” Botvinick says.
For instance, the team found that the brain understands potential rewards as existing on a distribution—rather than just “reward” or “no reward”—which helps us make decisions about actions. AI systems can be trained to use similar decision-making approaches, inspired by that insight.
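To make that distributional idea concrete, here is a minimal sketch in Python, using a made-up reward distribution rather than anything from DeepMind’s systems. Instead of learning one average value, the agent maintains several quantile estimates that together describe the range of possible rewards:

```python
import numpy as np

# Toy distributional value learning: rather than a single scalar estimate,
# keep several quantile estimates that jointly approximate the spread of
# rewards. The reward distribution and learning rate here are invented.
rng = np.random.default_rng(0)
taus = np.array([0.1, 0.25, 0.5, 0.75, 0.9])  # quantile levels to track
values = np.zeros_like(taus)                  # running value estimates
lr = 0.01

for _ in range(20000):
    reward = rng.exponential(2.0)             # toy skewed reward distribution
    for i, tau in enumerate(taus):
        below = 1.0 if reward < values[i] else 0.0
        values[i] += lr * (tau - below)       # quantile-regression (pinball-loss) step

print(np.round(values, 2))  # low estimates stay small; high ones stretch toward rare large rewards
```

At convergence, each estimate settles near a different quantile of the reward distribution, so the agent represents the spread of possible outcomes rather than just their mean.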
A Two-Way Modeling Approach
“We can use AI systems to understand the brain and cognition better, and vice versa,” says Dan Yamins, Stanford assistant professor of psychology and computer science.
One way his team has done that is by modeling the human visual system using AI, then comparing optimized models with actual brain functioning for tasks like face recognition. Broadly, the research uses four principles—architecture class, task, dataset, and learning rule—for such modeling to think about visual, auditory, and motor systems. The approach has helped generate insights, for example, about how infants use “unlabeled” visual data to learn object representations (using SAYCam data co-generated by Stanford researchers).
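As a rough illustration of the comparison step, the sketch below fits a linear mapping from a model layer’s activations to measured neural responses and scores how well it predicts held-out data. The arrays are random placeholders, not real activations or recordings, and the specific regression choice is an assumption made for illustration:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Toy model-to-brain comparison: regress recorded responses onto a model
# layer's features and measure held-out predictivity. All data here are
# randomly generated stand-ins.
rng = np.random.default_rng(0)
n_stimuli, n_features, n_neurons = 200, 512, 50
model_activations = rng.normal(size=(n_stimuli, n_features))     # layer outputs per stimulus
true_mapping = rng.normal(size=(n_features, n_neurons)) * 0.05   # pretend ground-truth link
neural_responses = model_activations @ true_mapping + rng.normal(scale=0.5, size=(n_stimuli, n_neurons))

X_train, X_test, y_train, y_test = train_test_split(
    model_activations, neural_responses, random_state=0)
regressor = Ridge(alpha=10.0).fit(X_train, y_train)
print("held-out R^2 (neural predictivity):", round(regressor.score(X_test, y_test), 3))
```

A model whose task-optimized features predict neural responses well, stimulus by stimulus, is taken as a better candidate account of the corresponding brain system.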
Similarly, the team moves in the other direction, from cognitive science to AI, where observations of infant learning have led to the use of 3-D graph embedding to model intuitive physics and other processes in AI. Now Yamins is working on building curiosity into AI systems, based largely on how babies interact with their environments.
Improving Generalization with General Training
Despite this progress, translating findings from the AI lab into real-world applications can be challenging, as pointed out by Stanford assistant professor of computer science and electrical engineering Chelsea Finn, who studies intelligence through robotic interaction. “Robots often learn to use only a specific object in a specific environment,” she says.
Her team is helping AI applications learn to generalize as humans do, by providing robots broader, more diverse experiences. For example, they found that offering robots visual demonstrations resulted in faster, more generalized learning related to tasks such as placing objects in drawers or using tools in established and new ways. “A little human guidance goes a long way,” Finn says.
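A heavily simplified sketch of learning from demonstrations appears below. It fits a linear policy to observation-action pairs by least squares, with random vectors standing in for the lab’s actual image observations and robot commands:

```python
import numpy as np

# Minimal behavioral-cloning sketch (not the lab's pipeline): fit a policy to
# demonstration data, then apply it to a new observation. Dimensions and data
# are placeholders.
rng = np.random.default_rng(1)
n_demos, obs_dim, act_dim = 500, 32, 7                # e.g., robot state -> joint commands
observations = rng.normal(size=(n_demos, obs_dim))
expert = rng.normal(size=(obs_dim, act_dim))          # pretend demonstrator
actions = observations @ expert + rng.normal(scale=0.01, size=(n_demos, act_dim))

weights, *_ = np.linalg.lstsq(observations, actions, rcond=None)  # fit policy to demos
new_obs = rng.normal(size=(1, obs_dim))
print("predicted action:", np.round(new_obs @ weights, 2))
```

Real systems replace the linear map with deep networks over raw video, but the core idea is the same: a small amount of demonstration data shapes the policy far faster than reward signals alone.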
In general, exposure to broader data leads to better generalization. Now Finn’s team is co-developing the RoboNet database to share learning-related videos (15 million frames and counting) across institutions to help robots “learn to learn.” Finn also makes her research and teaching materials widely available.
Toward Scalable Commonsense Intelligence
“Commonsense intelligence” reflects an ongoing gap between human and machine understanding, one that multiple speakers and their teams are trying to fill.
“We need to model how human intelligence really works,” University of Washington associate professor of computer science and engineering Yejin Choi says.
For example, AI systems struggle to handle unfamiliar, “out-of-domain” examples and lack our intuition for understanding the “whys” of visual elements, as illustrated by Roger Shepard’s “Monsters in a Tunnel” exercise (we see the scene depicted as a chase; an AI system may not).
To help machines develop commonsense intelligence, Choi’s team created the Visual Comet system using natural-language descriptions for 60,000 images (car crash, crying out for help, etc.). The goal is to enable models to move from language to knowledge, “to reason about everyday life,” as Choi says. In testing, the system has helped promote AI-based understanding of scenarios like why someone would write a controversial tweet, and what happens before and after.
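Purely for illustration, a single annotation in the spirit of that approach might look like the record below; the field names here are invented, not the dataset’s actual schema:

```python
# Hypothetical commonsense annotation: an observed event plus inferences about
# what likely came before, what comes after, and why. Field names are invented.
annotation = {
    "image": "street_scene.jpg",                     # hypothetical filename
    "event": "a driver is crying out for help",
    "before": ["the car skidded off the road"],
    "after": ["bystanders call emergency services"],
    "why": ["the driver wants to be rescued"],
}
print(annotation["before"][0])
```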
“We are teaching machines concepts more directly,” Choi says, “rather than through multiple-choice datasets.”
MIT-IBM Watson AI Lab co-director Aude Oliva is also working toward a commonsense-related objective, bringing cognitive science into AI models. “There’s a lot of ‘gold’ in basic neuroscience knowledge to apply to AI models,” Oliva says.
Her lab’s “Moments in Time” project, for example, uses a large dataset of three-second videos to help neural networks learn visual representations of activities such as eating, singing, and chasing, along with potential associations among visual images. The resulting models can understand abstract themes such as competition and exercise, including through “zero-shot” learning, in which a model recognizes categories for which it has seen little or no labeled training data.
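Here is a toy sketch of the zero-shot step: embed a clip and candidate activity labels in a shared space and pick the closest label, with no labeled clips for those activities required. The vectors are random stand-ins for real video and text embeddings:

```python
import numpy as np

# Toy zero-shot classification by embedding similarity. All vectors are
# randomly generated placeholders.
rng = np.random.default_rng(2)
labels = ["competition", "exercise", "cooking"]
label_embeddings = {name: rng.normal(size=128) for name in labels}
clip_embedding = label_embeddings["exercise"] + rng.normal(scale=0.3, size=128)  # pretend clip

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = {name: cosine(clip_embedding, emb) for name, emb in label_embeddings.items()}
print(max(scores, key=scores.get))  # -> "exercise"
```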
To better understand how humans learn and then apply those lessons to AI models, Oliva’s team uses MEG (which measures the magnetic fields produced by neural activity) and fMRI (which measures blood flow) brain imaging. In aggregate, the data illuminate which regions of the brain activate when, as they process visual, auditory, and other inputs, providing clues for how to build smarter, more dynamic AI systems. “We are learning the many common principles between human and AI cognition,” Oliva says.
Oliva’s MIT colleague, professor of computational cognitive science Joshua Tenenbaum, seeks to scale AI learning and impact using human-inspired models. “What if we could build intelligence that grows as it does in babies, into more mature versions?” he asks.
His teams are reverse-engineering core common sense using concepts inspired by developmental psychology, such as the “child as scientist or coder,” and harnessing probabilistic programs to build AI systems with human-like architecture. “We want to simulate the ‘game engine’ in your head,” Tenenbaum says, referring to the fast, approximate simulations of the physical world that human brains run.
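As a toy illustration of such a mental “game engine,” the sketch below runs many noisy simulations of a two-block tower to judge how likely the top block is to fall; the numbers are invented and the physics is deliberately crude:

```python
import random

# Crude intuitive-physics judgment: perceive the top block's offset with noise,
# simulate many times, and report the fraction of simulations in which it falls.
def prob_falls(top_offset, noise=0.2, half_width=0.5, samples=2000):
    falls = 0
    for _ in range(samples):
        perceived = top_offset + random.gauss(0, noise)   # noisy percept of the offset
        if abs(perceived) > half_width:                   # center of mass past the edge
            falls += 1
    return falls / samples

print(prob_falls(0.1))   # well supported -> low probability of falling
print(prob_falls(0.45))  # near the edge -> much less certain
```

Probabilistic programs of this flavor, run forward quickly and repeatedly, are one way to capture the fast, approximate physical predictions people make without effort.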
They have identified the location of the brain’s physics engine and created a neural network that better mimics the human visual system. The research informs the development of more flexible, scalable AI platforms capable of unprecedented inference and action, such as DreamCoder, a system that learns to write programs and can, for example, produce highly complex drawings.
Learning While Protecting Privacy
Still, one of the challenges of deep learning relates to data privacy. “Today’s Faustian bargain,” says Sanjeev Arora, a Princeton professor of computer science, “is that we hand over our data to enjoy a world fully customized for us”—whether related to retail, health care, or work.
He studies how deep learning systems can learn without revealing individual-level data. Established strategies, such as differential privacy and encryption, sacrifice accuracy and efficiency, respectively.
InstaHide, the system Arora has co-developed, encrypts images for AI model training and testing while enabling high accuracy and efficiency. Specifically, the system mixes each private image with randomly chosen public images and then randomly flips the sign of each pixel. A companion approach applies the idea to text-based data by encoding text representations and gradients.
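A rough sketch of that image-encoding idea, simplified and not the authors’ implementation, might look like this:

```python
import numpy as np

# Simplified InstaHide-style encoding (illustrative only): blend a private
# image with randomly chosen public images, then flip the sign of each pixel
# at random. The model trains on the encoded image, not the raw private one.
rng = np.random.default_rng(3)
private_image = rng.random((32, 32, 3))          # placeholder private image in [0, 1]
public_pool = rng.random((1000, 32, 32, 3))      # placeholder public image set

k = 4                                            # total images mixed together
picks = public_pool[rng.choice(len(public_pool), size=k - 1, replace=False)]
weights = rng.dirichlet(np.ones(k))              # random mixing coefficients summing to 1
mixed = weights[0] * private_image + np.tensordot(weights[1:], picks, axes=1)
signs = rng.choice([-1.0, 1.0], size=mixed.shape)
encoded = signs * mixed                          # random per-pixel sign flips
print(encoded.shape)
```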
“The systems have close to 100 percent accuracy and can help with data privacy for everything from medicine to self-driving cars,” Arora says.
Triangulating Intelligence at Stanford
Many Stanford speakers noted that triangulating intelligence is a priority across university departments. Stanford professor of human biology and director of the Symbolic Systems Program Michael Frank and Bill Newsome, professor of neurobiology and director of the Wu Tsai Neurosciences Institute, described how their organizations, along with HAI, have launched programs at this intersection.
Stanford undergraduates can now pursue a new human-centered AI concentration within the Symbolic Systems Program, with classes spanning digital ethics, the policy and politics of algorithms, and AI design.
“Symbolic Systems is a unique undergraduate program offering an interdisciplinary education in computation, philosophy, and cognitive science,” Frank says. The program, which started in 1986 and boasts well-known alumni including the founders of LinkedIn and Instagram, features an introductory course called Minds and Machines.
Stanford’s Wu Tsai Neurosciences Institute launched nearly a decade ago to promote a campus-wide community related to neuroscience. “The brain is too big a problem to be solved by any one discipline or set of experimental techniques,” Newsome says. To this end, Wu Tsai invests in faculty, interdisciplinary fellowships, and research, among other activities.
Other Stanford-housed initiatives, like HAI’s Hoffman-Yee grant program, will continue to bring these programs and other interdisciplinary researchers together, creating a broad ecosystem that drives valuable insights and applications at the intersection of AI, neuroscience, and psychology.
Stanford HAI’s mission is to advance AI research, education, policy, and practice to improve the human condition.