Robots and humans milled about Stanford’s campus for HAI’s spring conference, Robotics in a Human-Centered World.
Scholars zeroed in on the need for data, generalization, and better human experience.
Robots can now successfully fold shirts, but we’re far from a truly useful home robot.
That’s how hype was humbled at the recent Stanford HAI conference, “Robotics in a Human-Centered World: Innovations and Implications,” which drew experts in robotics, artificial intelligence, and human-robot interaction. Amid the current hype around robotics’ sky-high potential, speakers offered a more grounded view while also identifying promising opportunities in the field.
“We’re finally at a stage where we could think about advances in AI, advances in large language models, foundation models, and how they can influence physical robotic systems that make actions, and this is due to advances in hardware,” said Dorsa Sadigh, Stanford Associate Professor of Computer Science, who organized the conference with Computer Science Professor Karen Liu and Allison Okamura, the Richard W. Weiland Professor in the School of Engineering.

Dorsa Sadigh, Stanford Associate Professor of Computer Science, was one of the conference organizers. (Photo: Christine Baker)
Throughout the day’s keynotes, panel talks, lightning research rounds, and demonstrations from the Stanford Robotics Center, key insights emerged: Robotic foundation models hold potential, and there is a critical need for more data to make this approach successful. Additionally, truly useful robots must learn how to generalize not only across tasks but also in varied environments and embodiments. Most important, the human element cannot be overlooked—user experience is crucial for fostering widespread adoption.
“Unlike purely digital AI, robots interact physically with people, making user experience, safety, and ethical considerations impossible to ignore,” noted Stanford HAI Denning Co-Director James Landay. “I encourage you to not just think about what AI-powered robots can do, but how they should be designed to enhance human potential, align with our values, and distribute their benefits to society.”

Rodney Brooks, emeritus professor at MIT and co-founder of Robust AI: Beware the AI hype cycle. (Photo: Christine Baker)
Deliver on Promises, Be Skeptical
Rodney Brooks, an emeritus professor at MIT and co-founder of Robust AI, highlighted his own three laws of robotics during his keynote address. First, a robot’s visual appearance must accurately reflect its capabilities, or users will reject it. This is particularly challenging for humanoid robots, which give the appearance that they can perform any task a human can. Second, robots must not undermine human agency: Users will reject robots that intervene unnecessarily or fail to show respect. Finally, Brooks emphasized that it will take at least a decade of consistent improvement beyond lab demonstrations for these technologies to mature enough for reliable, affordable use.
Brooks warned against the historical cycles of hype and disillusionment that have plagued AI development, using the analogy of children chasing a soccer ball to illustrate how new trends captivate attention without delivering real progress. He pointed out that self-driving cars have existed since 1979, yet we still lack widely available, functional autonomous vehicles.
Instead of chasing fleeting trends, Brooks urged researchers to focus on practical, long-term objectives.

Maja Matarić, a distinguished professor at USC and principal scientist at Google DeepMind: Solve real problems for people. (Photo: Christine Baker)
Bodies Matter
How do we create robots that people will accept? “When you give people machines that improve quality of life, if you solve someone’s problem, they will find motivation to accept the technology,” said keynote speaker Maja Matarić, a distinguished professor at the University of Southern California and principal scientist at Google DeepMind.
Central to acceptance is embodiment—people engage more successfully with physical agents. Matarić cited a study on anxiety that compared an LLM chatbot with an embodied version: participants who interacted with the embodied agent reported significantly less distress than those in the chatbot group.
Additionally, personalization and adaptive technologies are crucial for user adoption. Matarić highlighted the importance of individual preference in robot voices, personalities, and even background narratives.
Too Many Humanoids?
“We are in the age of humanoid theater,” Brooks said—and these humanoids don’t deliver on human-like performance. In a panel on frontier research, Cornell Tech Associate Professor Wendy Ju said roboticists shouldn’t limit themselves to one form: “There’s no reason why the robot should be constrained in our imagination. My research team builds robotic ottomans, robotic tables and chairs, and robotic trash barrels. The human focus detracts from the whole field and the space they could have by learning from a wider pool of robots.”
Startups: New Funding Models, Reducing Friction
In a discussion on entrepreneurship, panelists noted a significant shift in funding patterns within the startup ecosystem. Previously, startups focused on developing minimum viable products and iterated toward success. OpenAI, however, has taken a different approach, according to Pieter Abbeel, co-founder of Covariant and a former OpenAI employee. “There’s a whole parallel funding system that’s pioneered by OpenAI, which is, we’re not building anything for anyone anytime soon. We’re going to build something so capable that once we’re there, it’ll commercialize itself,” he said.
Panelists also insisted that decreasing deployment friction is key to adoption. Charlie Kemp, co-founder of Hello Robot, noted that his assistive robots are designed to be smaller than the average human, making them more compatible with everyday home environments. Andrea Thomaz, CEO of Diligent Robotics, highlighted that her hospital robot assistants can navigate any space that complies with ADA standards.
“We don’t ask them to do anything that they wouldn’t do to make their environment better for people,” said Thomaz. “Still, the biggest competitor to commercializing robotics is the status quo—what people are doing without robots at all. You have to build something better.”

Panelists detailed the societal impact of robots, from taking on dangerous jobs to consolidating wealth and changing the nature of war. (Photo: Christine Baker)
Broad Societal Impact
The final panel of the day examined the societal impacts of emerging robotic technologies and emphasized the responsibility of developers to minimize potential harm.
Steffi Paepcke, a robotics UX senior manager at Toyota Research Institute, said that robots could transform the workforce by taking on unpleasant tasks. “Robots will play more the role of a tool that people can teach to do the tasks that they would prefer not to do, so they can focus more on the activities that really require a human touch,” she noted.
While these advancements present opportunities, they also raise concerns. Stanford Associate Professor of Mechanical Engineering Steven Collins pointed out that robots could reduce the number of humans in dangerous jobs but may also concentrate wealth and resources, potentially harming labor communities.
Amy Zegart, HAI associate director and the Morris Arnold and Nona Jean Cox Senior Fellow at the Hoover Institution, discussed the profound implications of robotics in warfare. In the ongoing Ukraine-Russia conflict, an estimated 1 million people have died, with drones responsible for about 70% of those deaths, she said. Zegart argued that while these technologies could lower defense costs and deter conflict, society must grapple with ethical questions regarding their use, especially in authoritarian versus democratic contexts. She also flagged concerns about potential cyber vulnerabilities.
Londa Schiebinger, the John L. Hinds Professor of the History of Science at Stanford, cautioned that the design of robots will influence societal stereotypes. She posed critical questions for developers: In nursing, where 90% of professionals are women, could a female robot nurse make patients more compliant? Conversely, would a male robot encourage more men to enter the field? “If we build stereotypes and social inequalities into our hardware, we could amplify those inequalities into the future,” Schiebinger warned.
New Research in Foundation Models, Training, Benchmarks, and More
In highlight talks throughout the day, scholars shared cutting-edge robotics work from their labs. Projects included:
Stanford Assistant Professor of Computer Science Jeannette Bohg, noting the scarcity of training data, has proposed a novel data collection method that uses videos from YouTube to train a robot policy.
Karol Hausman, the co-founder of Physical Intelligence (Pi), was able to train his company’s robots to fold shirts five times faster with two stages of training: pretraining on about 10,000 hours of data and post-training on a specific task with roughly 20 hours of data.
HAI Denning Co-Director Fei-Fei Li touched on several projects emerging from her lab, including BEHAVIOR, a benchmark for everyday household activities in virtual, interactive environments; and Digital Cousins, an approach to robotic learning that improves generalization over the digital twin approach.
Miss the conference or want to watch the discussions? Visit the Stanford HAI YouTube channel. Videos are posted within a week of an event.