Modeling How People Make Causal Judgments

By providing quantitative predictions of how people think about causation, Stanford researchers offer a bridge between psychology and artificial intelligence.

Image: Billiard balls scattered on a pool table (photo: Stefan Kleine Wolter)

If self-driving cars and other AI systems are going to behave responsibly in the world, they will need a keen understanding of how their actions affect others. And for that, researchers turn to the field of psychology. But often, psychological research is more qualitative than quantitative, and isn’t readily translatable into computer models.

Some psychology researchers are interested in bridging that gap. “If we can provide a more quantitative characterization of a theory of human behavior and instantiate that in a computer program, that might make it a little bit easier for a computer scientist to incorporate it into an AI system,” says Tobias Gerstenberg, assistant professor of psychology in the Stanford School of Humanities and Sciences and a Stanford HAI faculty member. 

Read the full paper: A Counterfactual Simulation Model of Causal Judgments for Physical Events

Recently, Gerstenberg and his colleagues Noah Goodman, Stanford associate professor of psychology and of computer science; David Lagnado, professor of psychology at University College London; and Joshua Tenenbaum, professor of cognitive science and computation at MIT, developed a computational model of how humans judge causation in dynamic physical situations (in this case, simulations of billiard balls colliding with one another).

“Unlike existing approaches that postulate about causal relationships, I wanted to better understand how people make causal judgments in the first place,” Gerstenberg says.

Although the model was tested only in the physical domain, the researchers believe it can be applied more generally, and may prove particularly helpful to AI applications, including in robotics, where AI struggles to exhibit common sense or to collaborate with humans intuitively and appropriately.

Video: A simulation of billiard balls to model causation

The Counterfactual Simulation Model of Causation

On the screen, a simulated billiard ball B enters from the right, headed straight for an open gate in the opposite wall – but there is a brick blocking its path. Ball A then enters from the upper right corner and collides with ball B, sending it angling down to bounce off the bottom wall and back up through the gate.

Did ball A cause ball B to go through the gate? Absolutely yes, we would say: It’s quite clear that without ball A, ball B would have run into the brick rather than go through the gate. 

Now imagine the same exact ball movements but with no brick in ball B’s path. Did ball A cause ball B to go through the gate in this case? Not really, most humans would say, since ball B would have gone through the gate anyway. 

These scenarios are two of many that Gerstenberg and his colleagues ran through a computer model that predicts how a human evaluates causation. Specifically, the model theorizes that people judge causation by comparing what actually happened with what would have happened in relevant counterfactual situations. Indeed, as the billiards example above demonstrates, our sense of causation differs when the counterfactuals are different – even when the actual events are unchanged.
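
To make that contrast concrete, here is a minimal Python sketch of the counterfactual comparison, not the researchers' actual model: the physics is stubbed out with the outcomes described above, whereas a real implementation would rerun a physics simulation with the candidate cause removed.

def ball_b_goes_through_gate(ball_a_present, brick_present):
    # Toy stand-in for a physics simulation of the clip described above.
    if ball_a_present:
        return True               # A deflects B off the bottom wall and through the gate
    return not brick_present      # without A, B goes through only if no brick blocks its path

def made_a_difference(brick_present):
    # Judge "whether" causation: compare what happened to the counterfactual without ball A.
    actual = ball_b_goes_through_gate(ball_a_present=True, brick_present=brick_present)
    counterfactual = ball_b_goes_through_gate(ball_a_present=False, brick_present=brick_present)
    return actual != counterfactual

print(made_a_difference(brick_present=True))    # True: without A, B would have hit the brick
print(made_a_difference(brick_present=False))   # False: B would have gone through anyway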

In their recent paper, Gerstenberg and his colleagues lay out their counterfactual simulation model, which quantitatively evaluates the extent to which various aspects of causation influence our judgments. In particular, we care not only about whether something caused an event to occur, but also about how it did so and whether it alone would have been sufficient to bring the event about. The researchers found that a computational model that considers these different aspects of causation best explains how humans actually judge causation across multiple scenarios.
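
One way such a model can produce graded rather than all-or-none judgments is to rerun the counterfactual many times with noisy physics and report how often removing the candidate cause would have changed the outcome. The sketch below is an illustrative simplification under an assumed noise model, and it scores only the "whether" aspect; the published model also evaluates how the outcome came about and whether the cause was sufficient on its own.

import random

def noisy_outcome(ball_a_present, brick_present, noise=0.15):
    # Toy stochastic stand-in for "ball B ends up going through the gate".
    nominal = True if ball_a_present else not brick_present
    return nominal if random.random() > noise else not nominal

def whether_cause_strength(brick_present, samples=10_000):
    # Estimate P(the outcome would have differed had ball A been absent).
    changed = 0
    for _ in range(samples):
        actual = noisy_outcome(ball_a_present=True, brick_present=brick_present)
        counterfactual = noisy_outcome(ball_a_present=False, brick_present=brick_present)
        changed += int(actual != counterfactual)
    return changed / samples

print(whether_cause_strength(brick_present=True))    # high (~0.75 at this noise level): ball A clearly mattered
print(whether_cause_strength(brick_present=False))   # low (~0.25): ball B would have gone through anyway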

The research is the first to make quantitative predictions about people’s causal judgments for physical events.

Counterfactual Causal Judgment and AI 

Gerstenberg is already working with several Stanford collaborators on a project to bring the counterfactual simulation model of causation into the AI arena. For the project, which has seed funding from HAI and is dubbed “the science and engineering of explanation” (or SEE), Gerstenberg is working with computer scientists Jiajun Wu and Percy Liang as well as Humanities and Sciences faculty members Thomas Icard, assistant professor of philosophy, and Hyowon Gweon, associate professor of psychology.

One goal of the project is to develop AI systems that understand causal explanations the way humans do. So, for example, could an AI system that uses the counterfactual simulation model of causation review a YouTube video of a soccer game and pick out the key events that were causally relevant to the final outcome – not just when goals were scored, but also counterfactuals such as near misses? “We can’t do that yet, but at least in principle, the kind of analysis that we propose should be applicable to these sorts of situations,” Gerstenberg says. 

The SEE project is also using natural language processing to develop a more refined linguistic understanding of how humans think about causation. The existing model only uses the word “cause,” but in fact we use many different words to express causation in different situations, Gerstenberg says. For example, in the case of euthanasia, we would say that someone helped or enabled a person to die by removing life support, rather than that they killed them. Or if a soccer goalie blocks numerous shots, we might say they contributed to their team’s win but not that they caused the victory.

“The assumption is that when we communicate with one another, the words that we use matter, and to the extent that these words have certain causal connotations, they will bring a different mental model to mind,” Gerstenberg says. Using NLP, the research team hopes to develop a computational system that generates more natural-sounding explanations for causal events.
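
As a purely hypothetical illustration of that idea, the sketch below picks a causal verb from rough aspect scores. The aspect names are loosely inspired by the model, but the thresholds and the verb mapping are invented here for illustration, not taken from the SEE project.

def causal_verb(made_a_difference, sufficient_alone):
    # Map rough aspect scores in [0, 1] to a verb; thresholds are illustrative only.
    if made_a_difference > 0.8 and sufficient_alone > 0.8:
        return "caused"
    if made_a_difference > 0.8:
        return "enabled"          # made the outcome possible, but was not enough by itself
    if made_a_difference > 0.3:
        return "contributed to"
    return "did not affect"

# A goalie who blocks several shots matters, but is not single-handedly responsible:
print(causal_verb(made_a_difference=0.6, sufficient_alone=0.2))   # "contributed to"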

Ultimately, the reason this all matters is that we want AI systems to both work well with humans and exhibit better common sense, Gerstenberg says. “In order for AIs such as robots to be useful to us, they ought to understand us and maybe operate with a similar model of causality that humans have.”

Causation and Deep Learning

Gerstenberg’s causal model could also help with another growing focus area for machine learning: interpretability. Too often, certain types of AI systems, in particular deep learning, make predictions without being able to explain themselves. In many situations, this can prove problematic. Indeed, some would say that humans are owed an explanation when AIs make decisions that affect their lives.

“Having a good causal model of the world or of whatever domain you’re interested in is very closely tied to interpretability and accountability,” Gerstenberg notes. “And, at the moment, most deep learning models do not incorporate any kind of causal model.”  

Developing AI systems that understand causality the way humans do will be challenging, Gerstenberg notes: “It’s tricky because if they learn the wrong causal model of the world, strange counterfactuals will follow.”

But one of the best indicators that you understand something is the ability to engineer it, Gerstenberg notes. If he and his colleagues can develop AIs that share humans’ understanding of causality, it will mean we’ve gained a greater understanding of humans, which is ultimately what excites him as a scientist.

“Who’s not interested in the question why?” he says. “And for that, causality is the key concept.”
