As robots move out of isolated settings into everyday environments, they need to do more than react to people, says Yuhang Che, who received his PhD in mechanical engineering at Stanford in 2019 and is now a software engineer in motion planning at Waymo. Robots need to communicate in ways that yield predictable human responses. 
 
For example, if two people are headed straight for one another in a hallway, they know how to navigate the situation: one person shifts to the side and the other responds, moving the opposite way. But what happens if you replace one of the people with a robot? Can it be trained to anticipate how a human will react when crossing its path? And can it learn to communicate its plans with body language or other means of expression?
 
With seed grant support from the Stanford Institute for Human-Centered Artificial Intelligence (HAI), Che used AI to train a robot on data about people’s behavior in path-crossing situations. He then showed that the trained robot could use both body-language-style cues and explicit messages to cross a human’s path efficiently. The explicit messages were delivered using a handheld haptic device that relies on the sense of touch to convey information. In separate experiments, Che also showed that AI can be personalized to help a specific person perform a task more efficiently.
 
The importance of building computational models of how humans act and react to robots is often overlooked, says Dorsa Sadigh, assistant professor of computer science and electrical engineering at Stanford and one of Che’s advisors. People might think “humans will figure out how to work around the robot,” she says, but in fact, if robots and haptic devices are trained to take human behavior into account, as Che’s are, they will do a better job of helping humans accomplish tasks.
 
Che’s work also aligns with a larger goal of understanding how people respond to autonomy. “There isn’t necessarily a one-size-fits-all solution,” says Allison Okamura, professor of mechanical engineering at Stanford University, principal investigator for Stanford’s Collaborative Haptics and Robotics in Medicine (CHARM) Lab, and another of Che’s advisors. “Instead of forcing people to adapt to a particular piece of technology, tech should adapt to people,” she says.
 

Crossing Paths with Robots

 
Passing one another in a hallway can be challenging even for people. If both dodge in the same direction, a little dance ensues. The jig ends only when one person communicates a clear intention, either with body language or with actual words.
 
To train a robot to avoid the hallway dance, Che first set up a scenario where an AI system could learn how humans respond to a robot’s behavior. During the learning process, the robot had a choice of two possible behaviors as it repeatedly crossed paths with a human: either always yield or totally ignore the human. The interactions were recorded by an overhead camera. “The AI system learns how humans move around when the robot is present,” Che says.
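
The article doesn’t give implementation details, but as a minimal sketch of the general idea, one could fit a simple model that predicts the human’s next move from their state relative to the robot and the robot’s chosen behavior (yield or ignore). The data and variable names below are placeholders, not from the study.

```python
import numpy as np

# Placeholder training data standing in for overhead-camera recordings:
# each sample pairs the human's current state relative to the robot and
# the robot's behavior (0 = always yield, 1 = ignore) with the human's
# observed next velocity.
rng = np.random.default_rng(0)
n = 500
human_state = rng.normal(size=(n, 4))            # [dx, dy, vx, vy] w.r.t. robot
robot_action = rng.integers(0, 2, size=(n, 1))   # 0 = yield, 1 = ignore
next_velocity = rng.normal(size=(n, 2))          # observed human response

# Fit a simple linear response model: next_velocity ≈ [state, action, 1] @ W.
features = np.hstack([human_state, robot_action, np.ones((n, 1))])
W, *_ = np.linalg.lstsq(features, next_velocity, rcond=None)

def predict_human_response(state, action):
    """Predict how the human will move given the robot's chosen behavior."""
    x = np.concatenate([state, [action, 1.0]])
    return x @ W
```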
 
Che then set up an experiment to see whether the trained robot could use the AI model to predict the human’s response and then make decisions that ensured certain goals were met. For example, could the robot get from point A to point B as quickly as possible without a collision, while ensuring that the human did the same? “These two goals might be aligned or might not,” Che says. “The robot is balancing these different factors when it makes decisions.”
 
“This is not a rule-based system,” Sadigh says. Nothing is hand-coded to tell the robot how to act in a given scenario. Instead, the robot uses the AI model of human behavior to make optimal decisions about how to communicate with the person crossing its path. Its options included both implicit and explicit communication strategies. For implicit communication, the robot would either pause or continue along its trajectory to signal whether it intended to let the person go first, a cue akin to body language. “It’s a combination of timing and speed to successfully communicate intent,” Che says. For explicit communication, Che used a human-held haptic device that would vibrate once or twice when the robot wanted to indicate either “I’m going first” or “you go first.” Che also tested the two strategies combined. He then measured the amount of time it took the robot and the human to complete their tasks of going from one location to another without a collision.
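
A hypothetical sketch of how such a decision step might look in code follows; the cue names, timing numbers, and cost weights are invented for illustration and are not Che’s actual model or values.

```python
import itertools

# Candidate choices in the path-crossing scenario: an implicit cue (pause or
# keep moving) and an explicit haptic cue ("I'm going first", "you go first",
# or silence). The robot scores every combination with a stand-in for its
# learned human model and picks the one that best balances both agents' travel
# time against collision risk.
IMPLICIT = ["pause", "keep_moving"]
EXPLICIT = ["robot_first", "human_first", "no_signal"]

def predict_outcome(implicit_cue, explicit_cue):
    """Toy stand-in for rolling out the learned human-response model.

    Returns (robot_time, human_time, collision_probability); the numbers
    are illustrative only.
    """
    robot_time = 6.0 + (1.0 if implicit_cue == "pause" else 0.0)
    human_time = 8.0
    if explicit_cue != "no_signal":
        human_time -= 2.0          # an explicit cue removes hesitation
    if implicit_cue == "pause":
        human_time -= 1.0          # yielding lets the human commit sooner
    ambiguous = implicit_cue == "keep_moving" and explicit_cue == "no_signal"
    collision_prob = 0.3 if ambiguous else 0.05
    return robot_time, human_time, collision_prob

def choose_communication(weight_human=1.0, collision_penalty=100.0):
    """Pick the cue combination with the lowest combined cost."""
    best, best_cost = None, float("inf")
    for imp, exp in itertools.product(IMPLICIT, EXPLICIT):
        robot_t, human_t, p_col = predict_outcome(imp, exp)
        cost = robot_t + weight_human * human_t + collision_penalty * p_col
        if cost < best_cost:
            best, best_cost = (imp, exp), cost
    return best, best_cost

print(choose_communication())
```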
 
The result: “By combining implicit and explicit communication and modeling human behavior, the robot makes its behavior easier for the human to understand and improves the efficiency of both the human and the robot in accomplishing their tasks,” Che says. Robots trained to take human behavior into account will be valuable on a factory floor, or in providing assistance to people in a home or hospital, Sadigh says. “Anywhere that robots need to navigate around people efficiently.”
 

A Tug Toward Personalized AI

 
Che also experimented with using haptic devices to provide directional cues. A small handheld gadget applied a slight tug to the inside of a person’s finger as he or she was walking. “It feels like it’s both stretching your skin and pushing on your finger a little bit,” Che says. “Its purpose is to tell you a direction.”
 
The person holding the haptic device was instructed to turn and walk in the direction of the tug. As he or she moved along a path, the device might deliver new tugs to correct the heading. 
 
To train an AI model on human responses to the haptic device, Che put ten people through their paces 120 times each, recording their movements with a laser scanner as the device tugged them in a variety of directions in discrete 15-degree increments. He also ran the same experiment using verbal instructions delivered through headphones.
 
Interestingly, responses to the verbal cues were slower than responses to the haptic device, and users said the verbal cues were more mentally demanding as well. 
 
“In general, haptic feedback is less disturbing: You feel a tug and you follow it, versus something constantly talking to you and you’d have to focus all of your attention on it,” Che says. “For all of these projects, the reason we explore haptics is because a lot of the other sensory channels are already occupied while you are walking on the street.”
 
But it is also hard to follow the haptic instructions precisely. “Some people are more sensitive to the device than others, and follow the instructions well,” Che says. Others find that more challenging. The beauty of Che’s experiment was that he could train an AI model for each person. So, for example, if a person tended to turn 30 degrees when instructed to turn 45 degrees, the model would learn to adjust the instructions for that individual.
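
As a rough illustration of that kind of per-person calibration, the sketch below fits a single gain relating commanded cue angles to the turns a person actually makes, then inverts it to scale future cues. The numbers are made up, not data from the study.

```python
import numpy as np

# Hypothetical per-person calibration data: commanded cue angles (degrees)
# and the turns the person actually made, as might be recorded by the
# laser scanner. Values are illustrative only.
commanded = np.array([15, 30, 45, 60, -15, -30, -45, -60], dtype=float)
actual    = np.array([11, 21, 30, 41, -10, -22, -31, -40], dtype=float)

# Fit a single per-person gain (actual ≈ gain * commanded) by least squares.
gain = np.dot(commanded, actual) / np.dot(commanded, commanded)

def personalized_command(desired_turn_deg):
    """Scale the cue so this person's typical response matches the desired turn."""
    return desired_turn_deg / gain

# Here gain ≈ 0.68, so the device asks for roughly 66 degrees
# to elicit a 45-degree turn from this person.
print(personalized_command(45.0))
```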
 
After the AI model had been trained with individual people’s responses to the device, Che ran people through their paces again using the trained system. Their actions were most accurate (i.e., they turned in the desired direction) when the device relied on what it knew about specific people’s responses rather than those of an average person.  
 
“We find that people are quite different from each other and you have to use a customized model for each person to get the best performance,” Che says. 
 
His work with the handheld haptic device could be useful in designing guidance devices for visually impaired people. It is also part of a bigger-picture effort to determine how AI techniques can help robots or haptic devices understand and respond to human behavior in an automated fashion.
 
As more and more autonomous systems come online, including autonomous cars, we will need a greater understanding of how people interact with such systems, Okamura says. She predicts that, at least for a time, autonomous systems will actually be human-robot collaborations. And Che’s work helps lay the groundwork for understanding what that might mean.
