Over the past half-century, neuroscientists have made extraordinary strides in understanding the human brain by inserting wires into the brains of animals like cats, rats, and monkeys and characterizing how their neurons fire. Though such experiments can’t be done as easily in people, looking at neurons in animals has nevertheless helped scientists understand the underpinnings of phenomena such as optical illusions, memory, and drug addiction.
But animal brains have their limitations. Some sophisticated human behaviors, like mathematical reasoning, are beyond the reach of animals – and even if animals can be trained to use numbers, it’s unclear whether they learn about them in the same way humans do, given that they don’t have the same capacity for language. So when Vinod Menon, Stanford professor of psychiatry and behavioral sciences and member of Stanford HAI and the Wu Tsai Neurosciences Institute, and Percy Mistry, a research scholar in Menon’s lab, wanted to understand how children learn about numbers, they didn’t look to biology. Instead, they decided to approximate the process of human number learning using a deep neural network.
Deep neural networks were originally modeled after the brain, and they have been widely used to probe the inner workings of the visual system. So by training a brain-like network to recognize numbers, Menon and Mistry were able to gather evidence about number learning in humans that would have been impossible to obtain otherwise. Their results, published in Nature Communications, suggest that an innate number sense may not be as important as other researchers have proposed.
Because there are limitations to neurophysiological experiments that can be conducted ethically in humans, this type of research could prove essential to understanding the human brain’s complex capabilities, Menon says. “It’s hard to make inroads into understanding the neural mechanisms of complex human cognitive processes without building models like this.”
Testing ‘Spontaneous Number Neurons’
In a previous study, researchers trained a deep neural network to recognize images and discovered, to their surprise, that some neurons in the network were sensitive to numbers – they responded especially strongly to pictures of a particular number of objects, despite never having been trained to identify the number of objects in an image. These results seemed to lend credence to the idea that numerosity is, in some sense, innate: that children may have a sense for numbers without being explicitly taught about them, and that future learning could depend on that sense.
But no one had actually tested whether those “spontaneous number neurons” help with number learning. To do so, one would have to first take a neural network trained to recognize objects, identify its number-sensitive neurons, retrain that network to report the number of objects in an image, and then see whether those neurons help the network learn that task – which is precisely what Mistry, Menon, and their colleagues did.
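The first step of that pipeline – identifying number-sensitive neurons – can be illustrated with a toy sketch. This is not the paper’s actual analysis; the simulated activations, the planted units, the selectivity threshold, and the function name `number_sensitive_units` are all illustrative assumptions. A unit is flagged when its average response varies with the number of objects shown far more than its trial-to-trial noise would explain:

```python
import numpy as np

rng = np.random.default_rng(0)

NUMEROSITIES = np.arange(1, 10)    # stimuli showing 1-9 objects
N_UNITS, N_TRIALS = 50, 40         # hypothetical layer of 50 units

# Simulated activations: every unit responds noisily to every image,
# and three planted units get Gaussian tuning around a preferred number.
responses = rng.normal(1.0, 0.3, size=(N_UNITS, len(NUMEROSITIES), N_TRIALS))
for unit, preferred in [(3, 2), (17, 5), (42, 9)]:
    tuning = np.exp(-0.5 * ((NUMEROSITIES - preferred) / 1.5) ** 2)
    responses[unit] += 2.0 * tuning[:, None]

def number_sensitive_units(responses, threshold=3.0):
    """Flag units whose mean response varies strongly with numerosity.

    A unit counts as number-sensitive when the spread of its tuning
    curve (max - min of its mean response across numerosities) exceeds
    `threshold` times its trial-to-trial noise; its preferred number
    is the peak of the tuning curve.
    """
    curves = responses.mean(axis=2)               # (units, numerosities)
    noise = responses.std(axis=2).mean(axis=1)    # per-unit trial noise
    spread = curves.max(axis=1) - curves.min(axis=1)
    selective = np.flatnonzero(spread > threshold * noise)
    return [(int(u), int(NUMEROSITIES[curves[u].argmax()])) for u in selective]

print(number_sensitive_units(responses))   # → [(3, 2), (17, 5), (42, 9)]
```

The same tuning-curve logic – strong, peaked responses to a particular numerosity – is how number-sensitive neurons are characterized in animal electrophysiology.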
They found that the spontaneous number neurons didn’t help with learning at all. Most neurons that started out number-sensitive either lost that number sensitivity over the course of training or became sensitive to a different number. And the neurons that did stay responsive to the same numbers didn’t seem to be doing anything particularly essential: Removing them from the network during the learning process didn’t have any effect on the network’s final performance.
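Why would removing those neurons have no effect? One plausible intuition, sketched below under assumed conditions (the synthetic data, unit count, and `readout_accuracy` helper are all illustrative, not the paper’s method), is redundancy: when number information is spread across many units, zeroing out a handful barely changes what can be decoded from the layer:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic layer in which numerosity information is spread redundantly
# across all 50 units rather than concentrated in a few.
n_samples, n_units = 600, 50
numbers = rng.integers(1, 10, size=n_samples).astype(float)
weights = rng.normal(0.0, 1.0, size=n_units)   # how each unit encodes number
acts = numbers[:, None] * weights[None, :] + rng.normal(0.0, 0.5, (n_samples, n_units))

def readout_accuracy(acts, labels, ablated=()):
    """Fit a least-squares numerosity readout, with the given units
    zeroed out to mimic removing candidate number neurons."""
    X = acts.copy()
    if ablated:
        X[:, list(ablated)] = 0.0
    coef, *_ = np.linalg.lstsq(X, labels, rcond=None)
    preds = np.clip(np.rint(X @ coef), 1, 9)
    return float((preds == labels).mean())

full = readout_accuracy(acts, numbers)
without = readout_accuracy(acts, numbers, ablated=(3, 17, 42))
print(full, without)   # ablating three units leaves accuracy unchanged
```

In this toy setting, ablating three units has essentially no effect on decoding accuracy – consistent in spirit with the team’s finding that removing the spontaneous number neurons did not change the network’s final performance.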
Bridging AI and Human Intelligence
While this study was entirely performed on computers, there’s reason to think it could have something to say about how human brains work. The team intentionally started with an object-recognition network that had previously been demonstrated to resemble parts of the monkey visual system – and after training, number-sensitive neurons in the neural network behaved like number-sensitive neurons in the monkey brain.
Without invasive human experiments, it’s impossible to make the same comparisons between the network and the human brain. But the team found other ways to attack the problem. Looking at the pool of number-sensitive neurons as a whole, they found that the network used two different strategies for telling numbers apart. One strategy used a linear number line, where the endpoints – 1 and 9 – were easy to distinguish, but numbers in the middle – 4, 5, and 6 – were harder to tell apart. The second strategy was instead based around the midpoint of the number line, so 4, 5, and 6 were perceived as very different from each other. The same pattern appears in children as they develop their number sense: they start out sensitive to low and high numbers, and over time begin using the midpoint of the number line as a reference point as well. “It was exciting to observe the emergence of number line representations similar to those seen in children, even though we did not explicitly train the neural network to do so,” Menon said.
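The difference between the two strategies can be made concrete with a small sketch. The midpoint-referenced encoding below is an illustrative assumption – a compressive code anchored at 5 – not the representation the network actually learned; it just shows how a midpoint reference pulls 4, 5, and 6 apart relative to a plain linear code:

```python
import numpy as np

numbers = np.arange(1, 10, dtype=float)

# Strategy 1: linear number-line code. Adjacent numbers are equally
# spaced, so 1 vs 9 is easy but 4 vs 5 is no easier than 8 vs 9.
linear = numbers

# Strategy 2: midpoint-referenced code (illustrative): each number is
# placed by its signed, compressed distance from the midpoint 5, so
# spacing is widest near the middle of the range.
midpoint = np.sign(numbers - 5) * np.log1p(np.abs(numbers - 5))

def gap(code, a, b):
    """Distance between the codes for numbers a and b."""
    return abs(code[a - 1] - code[b - 1])

print(gap(linear, 4, 5), gap(linear, 8, 9))      # equal: 1.0 and 1.0
print(gap(midpoint, 4, 5), gap(midpoint, 8, 9))  # 4 vs 5 now farther apart
```

Under the linear code, every adjacent pair is equally separated; under the midpoint code, pairs near 5 are separated by larger gaps than pairs near the ends, making 4, 5, and 6 easier to tell apart.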
It would be premature to conclude, however, that human children learn in exactly the same way that this neural network does. The model is ultimately “a very, very simple approximation of what the brain is doing, even with all its complexity,” Mistry says. That simplicity makes the network easier to study and train, but it also limits how much it can tell us about human biology.
Nevertheless, the model does an impressive enough job of approximating the number learning process in children that Mistry and Menon have high hopes for its future. Menon has spent years studying dyscalculia, a disability that affects numerical and mathematical skills. The team’s goal now is to use the network to study potential neural mechanisms for dyscalculia, by implementing those mechanisms in the network and seeing how they interfere with number learning.
“We can make hypotheses about different mechanisms that might be possible causes and evaluate which might be relevant. We can even look at possible interventions,” Mistry says. “We can use this model as a sandbox.”
The study, “Learning-induced reorganization of number neurons and emergence of numerical representations in a biologically inspired neural network,” published in Nature Communications this June. Other Stanford contributors include postdoctoral fellows Anthony Strock and Ruizhe Liu, and coterm student Griffin Young.