AI-driven automation is reshaping industries while raising concerns about job displacement and economic transitions.
Robots are becoming a core building block in engineering and healthcare applications, altering the way many industries operate and improving quality of life. With AI, robots gain the ability to learn and adapt so that they can work collaboratively alongside humans and other robots in real-world environments. This industry brief provides a cross-section of key research – at HAI and across Stanford – that translates AI methods into new algorithms for human-robot interaction and robot navigation. Discover how researchers are designing intelligent robots that learn and adapt from human demonstration, and how they could be used to disrupt and create markets in a wide range of industries, including manufacturing, healthcare, autonomous vehicles, and many more.
Real-world planning problems, including autonomous driving and sustainable energy applications like carbon storage and resource exploration, have recently been modeled as partially observable Markov decision processes (POMDPs) and solved using approximate methods. To solve high-dimensional POMDPs in practice, state-of-the-art methods use online planning with problem-specific heuristics to reduce planning horizons and make the problems tractable. Algorithms that learn approximations to replace heuristics have recently found success in large-scale fully observable domains. The key insight is the combination of online Monte Carlo tree search with offline neural network approximations of the optimal policy and value function. In this work, we bring this insight to partially observable domains and propose BetaZero, a belief-state planning algorithm for high-dimensional POMDPs. BetaZero learns offline approximations that replace heuristics to enable online decision making in long-horizon problems. We address several challenges inherent in large-scale partially observable domains: namely, transitioning in stochastic environments, prioritizing action branching with a limited search budget, and representing beliefs as input to the network. To formalize the use of all limited search information, we train against a novel Q-weighted visit counts policy. We test BetaZero on various well-established POMDP benchmarks found in the literature and a real-world problem of critical mineral exploration. Experiments show that BetaZero outperforms state-of-the-art POMDP solvers on a variety of tasks.
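The abstract's "Q-weighted visit counts policy" combines the two statistics a tree search accumulates at the root: how often each action was visited and the Q-value estimated for it. The sketch below is an illustrative reading of that idea, not the exact formulation from the BetaZero paper: normalized visit counts are multiplied by a softmax over Q-values and renormalized to form the policy training target. The function name and temperature parameter are assumptions for illustration.

```python
import numpy as np

def q_weighted_policy(visits, q_values, temperature=1.0):
    """Illustrative Q-weighted visit-count policy target.

    visits:   root visit counts N(b, a) from Monte Carlo tree search
    q_values: root action-value estimates Q(b, a)
    Returns a probability distribution over actions.
    """
    visits = np.asarray(visits, dtype=float)
    q_values = np.asarray(q_values, dtype=float)

    # Normalized (tempered) visit counts: the exploration statistics.
    pi_visits = visits ** (1.0 / temperature)
    pi_visits /= pi_visits.sum()

    # Softmax over Q-values: the value signal from the limited search.
    pi_q = np.exp(q_values - q_values.max())  # shift for numerical stability
    pi_q /= pi_q.sum()

    # Element-wise product, renormalized, as the training target.
    target = pi_visits * pi_q
    return target / target.sum()
```

Weighting visits by Q-values lets an action that was under-visited (because of a limited search budget) but looks valuable still receive probability mass in the training target, rather than relying on raw visit counts alone as in AlphaZero-style targets.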