
Seeking the Next Generation of Intelligent Machines

At HAI’s upcoming fall conference, scholars from AI, neuroscience, and psychology will share new research and examine the intersection of their worlds.

Image: Brain X-rays (Nomadsoul1/Getty Images)

HAI's fall conference will focus on new research at the intersection of how humans learn, how machines learn, and how they can learn from each other.

The relationship between human intelligence and artificial intelligence is closely intertwined – for decades, AI has been influenced and inspired by the fields of neuroscience and psychology. One example: Neural networks, the complex computing systems that today might estimate depth for autonomous cars, determine derivative securities pricing, or forecast product sales, were modeled after the neural networks in the human brain.

Deepening this interconnected relationship is the focus of the Stanford Institute for Human-Centered Artificial Intelligence’s fall virtual conference, Triangulating Intelligence: Melding Neuroscience, Psychology, and AI. The Oct. 7 gathering is “about the latest research at the intersection of how humans learn, how machines learn, and how we can learn from each other,” says HAI associate director and Stanford applied physics associate professor Surya Ganguli.

Ganguli, along with HAI associate director and Stanford Artificial Intelligence Laboratory director Chris Manning, will host the day-long conference, which brings together scholars from all three disciplines to share new research and discuss a path forward for the next generation of intelligent machines.

Here, Ganguli and Manning explain who should attend and what to expect:

What is this conference’s goal?

Manning: At the moment, humans are the one highly intelligent machine, but we’re increasingly developing computers that have aspects of intelligence. We believe both sides will be able to benefit from these discussions. We’ll get further ideas, just like the development of neural networks, to improve how we build artificial intelligence, but equally on the human side, you often need to understand mechanisms to understand how things really work, and building models is a classic way to understand mechanisms.

Ganguli: We want people to think about how we can go beyond current practices, where we’re just training large neural networks on large amounts of data. We know these things are not that robust; they have all sorts of issues. We don’t know whether making networks bigger and following existing practices is like building a tree to get to the moon: a complete dead end that will require new ideas. So, we would like to explore new ideas and be inspired by neuroscience and cognitive science to see what the next major ideas in AI would be.

How are neuroscience, psychology, and AI already intertwined today?

Manning: Totally dominating artificial intelligence at the moment is deep learning, or artificial neural networks. The basic ideas of those networks arose, and were explored, largely in cognitive science. It was neuroscience ideas that initially inspired the nature of neural networks, and it was cognitive scientists who actively revived this exploration in the 1980s and planted the seeds for what is now the big technological progress, now that our computers have started to catch up.

Ganguli: Another example I have talked about is reinforcement learning – how do systems learn from reward? It turns out animals and machines seem to learn in very similar ways from reward, and machine learning actually helped out neuroscience in trying to decipher how the reward system of the brain works. So, there’s a lot of interaction between those two things.

What will the day’s discussions include?

Ganguli: We’ll talk about everything from robotics to natural language to curiosity-driven play. Dan Yamins, one of our speakers, is trying to imbue artificial agents with a notion of curiosity, so that they’ll play and learn about the world much more quickly through efficient exploration, driven by curiosity and novelty seeking. Then Aude Oliva takes a cognitive science approach to AI, particularly problems in vision, while Sanjeev Arora is at the forefront of using a theoretical lens to understand machine learning.

Manning: This will be a great opportunity to hear from some exciting minds and a range of perspectives. It’ll be a chance to hear Matt Botvinick, who leads the neuroscience efforts at DeepMind, the research institute now owned by Google. They’re doing a huge amount of fundamental research in neuroscience and connecting it to AI. For cognitive science and psychology, Joshua Tenenbaum at MIT is presenting his ideas about how we build the next generation of artificial intelligence. Chelsea Finn is an exciting young scholar who is really pioneering the exploration of both deep learning techniques applied to reinforcement learning and a particular technique called meta-learning, which is how you get computers not just to learn to do one thing, like recognize objects, but to be in a state like a young human being’s, where they can learn to do all sorts of different things depending on what they’re exposed to. And Yejin Choi is really leading the push for how we can get computers to show some of the common sense that human beings have, at least most human beings have on a good day, and that computers have historically lacked.

What’s HAI’s role in this conversation?

Manning: In its formation, HAI made several big bets focused on different ways to link people and artificial intelligence. This is the argument: To really push a next generation of artificial intelligence much closer to the reasoning, common sense, and flexibility of human intelligence, we must look again to inspiration from biological intelligence. For various reasons over the last two decades, artificial and machine intelligence became much more separated from cognitive science than it was in the 1970s, ’80s, and ’90s, and we believe that pushing them back closer together and having more interaction will really be an important part of bringing in the next generation of more human-like artificial intelligence. We want to showcase and inspire that work in the context of this workshop.

Who should attend this conference?

Ganguli: Anyone who’s interested in doing a deep dive into the nature of our intelligence and how we can recreate it, from undergraduate and graduate students to professors, but also people working in industry or anyone interested in developing AI systems.

Manning: This will be slightly more technical than HAI’s more recent conferences, so it’s for anyone who wants to geek out about and be inspired by AI and neuroscience and psychology.

What are you each personally most excited about?

Ganguli: I’m really excited about the rapid-fire mixture of ideas from different fields in the panel discussions with the speakers and really trying to think about: Where are we headed? What are we thinking about going forward? How do we break out of the current Kool-Aid of deep learning that seems to work really well in industry but may not be the final answer to general intelligence?

Manning: One area that I’m personally close to and excited about is how to make more progress on common-sense intelligence. Artificial intelligence took a real hard left in the 1990s, when so much of the emphasis moved to pattern recognition tasks: how to recognize objects in an image or a video, how to recognize words in speech. I think it’s high time that we start to get back to higher levels of more deliberate cognition and start to refocus on planning and reasoning.

Learn more or register for the event here.

Stanford HAI’s mission is to advance AI research, education, policy, and practice to improve the human condition.