Seminar

HAI Weekly Seminar with Jeannette Bohg

Status
Past
Date
Wednesday, October 21, 2020 10:00 AM - 11:00 AM PDT
Topics
Robotics

Scaffolding and Imitation Learning - Human Learning Principles Transferred to Robots

Learning contact-rich robotic manipulation skills is a challenging problem due to the high dimensionality of the state and action spaces, as well as the uncertainty introduced by noisy sensors and inaccurate motor control. In this talk, I want to show how two principles of human learning can be transferred to robots to combat these factors and achieve more robust manipulation across a variety of tasks.

The first principle is scaffolding. Humans actively exploit contact constraints in the environment, and by adopting a similar strategy, robots can also achieve more robust manipulation. In this talk, I will present an approach that enables a robot to autonomously modify its environment and thereby discover how to ease manipulation skill learning. Specifically, we provide the robot with fixtures that it can freely place within the environment. These fixtures impose hard constraints that limit the outcomes of the robot's actions; they thereby funnel the uncertainty arising from perception and motor control and scaffold manipulation skill learning. We show that this form of scaffolding dramatically speeds up manipulation skill learning.
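The "funneling" effect of a placed fixture can be illustrated with a toy sketch. This is purely illustrative and not the speaker's actual system: here a noisy insertion attempt must land in a narrow target interval, and a hypothetical fixture simply clamps the outcome into a constraint region, absorbing the perception and motor noise.

```python
import random

random.seed(0)

TARGET = (0.48, 0.52)   # interval the peg must land in to count as success
FIXTURE = (0.49, 0.51)  # hard constraint imposed by a (hypothetical) placed fixture
NOISE = 0.05            # std. dev. of combined perception/motor noise

def attempt(goal=0.5, fixture=None):
    """One noisy insertion attempt; a fixture clamps the outcome into its bounds."""
    outcome = random.gauss(goal, NOISE)
    if fixture is not None:
        outcome = min(max(outcome, fixture[0]), fixture[1])
    return TARGET[0] <= outcome <= TARGET[1]

def success_rate(fixture, trials=10_000):
    return sum(attempt(fixture=fixture) for _ in range(trials)) / trials

free = success_rate(None)          # unconstrained: noise often misses the target
scaffolded = success_rate(FIXTURE)  # fixture funnels every outcome into the target
print(f"without fixture: {free:.2f}, with fixture: {scaffolded:.2f}")
```

Because the fixture's constraint region lies inside the target interval, every clamped outcome succeeds, while the unconstrained attempts fail whenever the noise exceeds the tolerance; this is the sense in which a hard constraint can stand in for precision the robot does not have.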

The second principle is learning from demonstrations through imitation. Humans have gradually developed language, mastered complex motor skills, and created and utilized sophisticated tools. The act of conceptualization is fundamental to these abilities because it allows humans to mentally represent, summarize, and abstract diverse knowledge and skills. By means of abstraction, concepts learned from a limited number of examples can be extended to a potentially infinite set of new and unanticipated situations, and they can more easily be taught to others by demonstration.

I will present work that gives robots the ability to acquire a variety of manipulation concepts that act as mental representations of verbs in a natural language instruction. We propose learning from human demonstrations of manipulation actions, as recorded in large-scale video datasets annotated with natural language instructions. In extensive simulation experiments, we show that a policy learned in this way can perform a large percentage of the 78 different manipulation tasks on which it was trained, that it generalizes over variations of the environment, and that it can generalize to novel but similar instructions.
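The core idea of an instruction-conditioned policy can be sketched as toy behavioral cloning. Everything here is an illustrative assumption rather than the speaker's model: the verbs, the one-hot "concept" embedding, and the linear policy are stand-ins; real systems learn embeddings and policies from annotated video.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical manipulation "verbs"; a one-hot embedding stands in for a
# learned representation of the concept named in the instruction.
VERBS = ["push", "lift", "open"]

def embed(verb):
    onehot = np.zeros(len(VERBS))
    onehot[VERBS.index(verb)] = 1.0
    return onehot

# Synthetic demonstrations standing in for annotated video data: each
# demonstrated action is a shared function of the state plus a
# verb-specific offset (unknown to the learner).
W_true = rng.normal(size=(4, 2))
b_true = {v: rng.normal(size=2) for v in VERBS}

def demo_action(verb, state):
    return state @ W_true + b_true[verb]

X, Y = [], []
for _ in range(200):
    verb = VERBS[rng.integers(len(VERBS))]
    state = rng.normal(size=4)
    X.append(np.concatenate([embed(verb), state]))  # condition on the verb
    Y.append(demo_action(verb, state))
X, Y = np.asarray(X), np.asarray(Y)

# Behavioral cloning: one policy shared across all instructions, fit by
# least squares on the (instruction, state) -> action demonstration pairs.
policy, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Evaluate on unseen states: the conditioned policy should reproduce the
# demonstrated behavior for every verb.
max_err = 0.0
for verb in VERBS:
    s = rng.normal(size=4)
    pred = np.concatenate([embed(verb), s]) @ policy
    max_err = max(max_err, float(np.abs(pred - demo_action(verb, s)).max()))
print(f"max action error on held-out states: {max_err:.2e}")
```

The design point this illustrates is that a single policy, conditioned on a representation of the instruction, can cover many tasks at once instead of training one policy per task; the same conditioning mechanism is what allows generalization to related instructions.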

Speaker
Jeannette Bohg
Assistant Professor of Computer Science, Stanford University

Watch Event Recording

Related Events
  • Inside the 2026 AI Index Report (Seminar, May 20, 2026, 12:00 PM - 1:15 PM)
    The AI Index, currently in its ninth year, tracks, collates, distills, and visualizes data relating to artificial intelligence.
  • AI+Science: Accelerating Discovery (Conference, May 05, 2026, 8:30 AM - 6:45 PM)
    AI+Science: Accelerating Discovery is an interdisciplinary conference bringing together researchers across physics, mathematics, chemistry, biology, neuroscience, and more to examine how AI is reshaping scientific discovery.
  • Wolfgang Lehrach | Code World Models for General Game Playing (Seminar, May 13, 2026, 12:00 PM - 1:15 PM)
    While Large Language Models (LLMs) show promise in many domains, relying on them for direct policy generation in games often results in illegal moves and poor strategic play.