Image: A boy and a girl in an elementary school classroom are using electronic tablets as they sit at their desks.

A new program can predict when a student will start “wheel-spinning” in a lesson and recommend a solution.

Computer-based, self-paced learning systems have the potential to expand educational opportunities for untold millions of students, especially in places where human teachers are scarce.

But despite scores of new learning platforms, such as BrainPOP or Education Galaxy, current programs can’t offer any advice when a student gets bogged down and starts “wheel-spinning.”

A human teacher can often figure out the problem and offer a solution, such as having the student go back to an earlier lesson. But computers aren’t equipped to sort through all the possible reasons a person has become stuck. Did the student just guess their way through prerequisite lessons, without actually mastering them? Does the student need help with the technology? Or was the activity inadvertently designed in a way that dooms the student to failure?

Two Stanford researchers, working with a nonprofit that supplies tablet computers to children in crisis zones, have tested a machine-learning program that not only predicts when a student is likely to start spinning wheels but also recommends a solution.

“People get stuck, which can be so frustrating, and they need outside help,” said Tong Mu, a graduate student in electrical engineering at Stanford who worked on the project. “A human teacher sitting next to the student can often figure out the right intervention, but teachers are spread too thin in many places and today’s systems don’t really offer that kind of help.” 

Read the paper: Towards Suggesting Actionable Interventions for Wheel-Spinning Students

Mu and Emma Brunskill, an associate professor of computer science at Stanford University and a Stanford Institute for Human-Centered AI affiliate, teamed up with Andrea Jetten, an educator with War Child Holland, to test their program. War Child Holland sends tablets and learning software to elementary schools in conflict regions from Sudan and Chad to Bangladesh.

To train their AI model, Mu and Brunskill had it analyze performance data from 1,170 Ugandan school children who had used the tablets to learn English reading skills through videos and mini-games. 

About half the students spun their wheels at least once, which the researchers defined as making at least 10 attempts to answer a single question. Wheel-spinning indicates that a student is floundering and guessing, and it can be a good predictor of further problems down the road. With an average class size of 114 students in Uganda, however, teachers had only limited ability to offer one-on-one help.
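For concreteness, the wheel-spinning label itself is straightforward to compute from attempt logs. The sketch below is a minimal Python example under assumed data: the table layout and the column names (student_id, question_id) are hypothetical, and only the 10-attempt cutoff comes from the study.

    import pandas as pd

    # Hypothetical attempt log: one row per answer attempt. The column
    # names are illustrative, not the schema used in the actual study.
    attempts = pd.DataFrame({
        "student_id":  [1, 1, 1, 2, 2],
        "question_id": ["q1", "q1", "q2", "q1", "q2"],
    })

    WHEEL_SPIN_THRESHOLD = 10  # the study's cutoff: >= 10 attempts on one question

    # Count attempts per (student, question) and flag wheel-spinning.
    counts = (
        attempts.groupby(["student_id", "question_id"])
        .size()
        .rename("n_attempts")
        .reset_index()
    )
    counts["wheel_spinning"] = counts["n_attempts"] >= WHEEL_SPIN_THRESHOLD

    # A student "spun their wheels at least once" if any question is flagged.
    print(counts.groupby("student_id")["wheel_spinning"].any())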

The new model was able to predict whether a child would fall into wheel-spinning, to some extent even before the child had begun a new lesson. Among the clues it used were the number of attempts a student had made on prerequisite problems and how long it had been since the student last encountered the subject. The more a student spins wheels in earlier lessons, for example, the more likely they are to do so in subsequent ones.
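A rough sketch of how such a predictor could be set up is shown below, assuming the clues described above are turned into numeric features. The feature names, their values, and the choice of logistic regression are all assumptions for illustration; the paper's actual model and feature set may differ.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Illustrative feature vectors, one per (student, upcoming lesson):
    # [attempts_on_prerequisites, days_since_last_exposure, prior_wheel_spins]
    # The values and the feature set are hypothetical.
    X = np.array([
        [12, 30, 2],
        [3,   2, 0],
        [25, 45, 4],
        [5,   7, 1],
    ])
    y = np.array([1, 0, 1, 0])  # 1 = the student went on to wheel-spin

    # Any standard classifier would do for this sketch; logistic regression
    # keeps the learned coefficients easy to inspect.
    clf = LogisticRegression().fit(X, y)

    new_student = np.array([[15, 20, 3]])
    print("P(wheel-spinning):", clf.predict_proba(new_student)[0, 1])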

Testing Against Humans

The real question, however, was whether the system could diagnose the nature of the problem and recommend the right intervention. 

The researchers used Shapley values, a popular tool in interpretable machine learning, to attribute the model’s wheel-spinning predictions to the features that drive them. Some of those features point to immediate remediation actions, such as when a student is struggling with a prerequisite skill for the current topic. The researchers then compared their system’s recommendations, such as having a student go back to an earlier lesson or get tips about the game’s mechanics, to those made by a human expert.
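In practice, that attribution step could look roughly like the sketch below, which uses the open-source shap package on a toy tree model. Only the use of Shapley values comes from the paper; the gradient-boosted classifier, the feature names, and the data are hypothetical stand-ins.

    import numpy as np
    import shap  # pip install shap
    from sklearn.ensemble import GradientBoostingClassifier

    # Hypothetical features, mirroring the earlier sketch.
    feature_names = ["attempts_on_prerequisites", "days_since_last_exposure",
                     "prior_wheel_spins"]
    X = np.array([[12, 30, 2], [3, 2, 0], [25, 45, 4], [5, 7, 1]])
    y = np.array([1, 0, 1, 0])  # 1 = wheel-spinning observed

    model = GradientBoostingClassifier().fit(X, y)

    # TreeExplainer computes exact Shapley values for tree ensembles,
    # attributing each prediction to the individual input features.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)

    # For one flagged student, rank features by how strongly they pushed
    # the prediction toward wheel-spinning.
    student = 0
    for name, value in sorted(zip(feature_names, shap_values[student]),
                              key=lambda pair: -abs(pair[1])):
        print(f"{name}: {value:+.3f}")

In a setup like this, a large contribution from the prerequisite feature would map to the kind of intervention the article describes, such as sending the student back to the earlier lesson.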

They tested the tool on six hypothetical students built out of the historical performance data. In four out of the six cases, the model and the human expert came up with the same recommendations. In the two cases where they disagreed, it was for reasons outside the system’s control. In one case, the system had been given the wrong information about the necessary prerequisites. In the other, the wheel-spinning stemmed from problems with how the activity was designed. 

Brunskill says the model could have broad educational applications, not only in classrooms in impoverished nations but also in affluent ones. It could also be used to improve workplace training and continuing education for adults.

The goal isn’t to have machine-learning models replace human teachers, she says, but to have them collaborate by providing recommendations when a student struggles. The more that a computer can diagnose significant student problems, the easier it will be for a limited number of human teachers to help large numbers of students.

“I think of this effort as humans and AI working together to support a student and provide quality education,” Brunskill says. “We’re thinking about creating ecosystems to support learning that involve teachers, parents, and AI tutors. AI has limitations, as do humans, but AI has the potential to identify when a system isn’t working well, and the reasons for that.”
