USA - 2021
For contributions to robot learning, including learning from demonstrations and deep reinforcement learning for robotic control
Pieter Abbeel has pioneered learning in robotics, an area of research and engineering that barely existed when he started his PhD nearly 20 years ago but has since become one of the most prevalent research paradigms in the field, in large part thanks to his foundational contributions.
Abbeel has pioneered how to make robots learn, both from human demonstrations ("apprenticeship learning") and through their own trial and error ("reinforcement learning"), laying the foundation for the next generation of robotics.
Apprenticeship Learning: Abbeel's doctoral dissertation showcased that a clever combination of learning and optimal control can enable helicopter control at the level of the best human pilots. Upon his arrival as a faculty member at UC Berkeley, he showed how to generalize these underlying ideas to robotic laundry folding and surgical suturing. More recently, he has pioneered few-shot imitation learning, where a robot is able to learn to perform a task from just one demonstration after having been pre-trained on a large set of demonstrations on related tasks.
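The core idea behind learning from demonstrations can be illustrated with a toy behavioral-cloning sketch: fit a policy directly to (state, action) pairs recorded from a demonstrator. This is a simplified stand-in, not Abbeel's actual methods; the linear least-squares "policy" below substitutes for the neural-network policies used in practice, and all names and data are illustrative.

```python
# Toy sketch of learning from demonstrations via behavioral cloning:
# fit a policy to (state, action) pairs produced by a demonstrator.
import numpy as np

def fit_linear_policy(states, actions):
    """Least-squares fit of actions ~= states @ W (behavioral cloning)."""
    W, *_ = np.linalg.lstsq(states, actions, rcond=None)
    return W

# Hypothetical demonstrator whose action is a linear function of the state.
rng = np.random.default_rng(0)
true_W = np.array([[2.0], [-1.0]])
demo_states = rng.normal(size=(100, 2))
demo_actions = demo_states @ true_W

W = fit_linear_policy(demo_states, demo_actions)
# With noiseless demonstrations, the cloned policy recovers the
# demonstrator's mapping.
print(np.allclose(W, true_W, atol=1e-6))  # True
```

Real robotic tasks replace the linear map with a deep network and the noiseless toy data with high-dimensional sensor streams, but the underlying supervised-learning structure is the same.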
Deep Reinforcement Learning: The harnessing of deep neural networks in reinforcement learning (RL) has been a hot topic for several years, highlighted by Google DeepMind's AlphaGo beating the human world champion at Go. RL allows artificial intelligence systems to learn from "trial and error" feedback, alleviating the need for demonstrations. Abbeel is considered the leader of his generation in RL, especially as it pertains to robotics. His work on trust region policy optimization provided the first reliable RL procedure for continuous control, as showcased in simulated robotic environments. From there, Abbeel has made several other pioneering contributions to deep RL for robotics. These contributions include generalized advantage estimation, which enabled the first learning of 3D robot locomotion; soft actor-critic, which is one of the most popular deep RL algorithms to date; domain randomization, which showed that policies learned across appropriately randomized simulators can generalize surprisingly well to the real world; and hindsight experience replay, which has been instrumental for deep RL in sparse-reward, goal-oriented environments.
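The key trick behind hindsight experience replay can be sketched in a few lines: a goal-conditioned trajectory that failed to reach its intended goal is relabeled with a goal it actually did reach, turning a sparse-reward failure into a useful success example. This is a minimal illustration under assumed names and a hypothetical environment, not the published algorithm's implementation.

```python
# Toy sketch of hindsight experience replay (HER) relabeling: a failed
# goal-conditioned trajectory is stored a second time with a goal the
# agent actually achieved, so the sparse reward becomes informative.

def relabel_with_hindsight(trajectory, achieved_goal):
    """Replace each step's goal with the goal actually achieved and
    recompute the sparse reward (1 if state == goal, else 0)."""
    relabeled = []
    for state, action, _goal, _reward in trajectory:
        reward = 1.0 if state == achieved_goal else 0.0
        relabeled.append((state, action, achieved_goal, reward))
    return relabeled

# A trajectory that never reached its original goal (goal = 5), so every
# reward under the sparse reward function is zero ...
traj = [(0, "right", 5, 0.0), (1, "right", 5, 0.0), (2, "right", 5, 0.0)]
# ... is relabeled with the final state it did reach (2), so the last
# transition now carries a positive learning signal.
hindsight = relabel_with_hindsight(traj, achieved_goal=2)
print(hindsight[-1])  # (2, 'right', 2, 1.0)
```

In the full algorithm both the original and the relabeled transitions go into a replay buffer for an off-policy learner; the relabeling is what lets the agent learn from episodes that would otherwise provide no reward signal at all.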
Abbeel's work has been widely recognized by his peers. His doctoral dissertation received the 2008 Dick Volz Best U.S. Ph.D. Thesis in Robotics & Automation Award. He received a Sloan Foundation Fellowship in 2011, an NSF Early Career Development Program Award (NSF-CAREER) in 2014, and a Presidential Early Career Award for Scientists and Engineers in 2016. He was elected an IEEE Fellow in 2018. His work is widely cited, and he has received numerous best-paper awards at top robotics and machine-learning conferences.
Abbeel's groundbreaking research has helped shape contemporary robotics and continues to drive the future of the field.