ReLMM: Practical RL for Learning Mobile Manipulation Skills Using Only Onboard Sensors
Charles Sun · Jedrzej Orbik · Coline Devin · Abhishek Gupta · Glen Berseth · Sergey Levine

In this paper, we study how mobile manipulators can autonomously learn skills that require a combination of navigation and grasping. Learning robotic skills in the real world remains challenging without large-scale data collection and supervision. These difficulties have often been sidestepped by limiting the robot to only manipulation or navigation, and by using human effort to provide demonstrations, task resets, and data labeling during the training process. Our aim is to devise a robotic reinforcement learning system for learning navigation and manipulation together, in a way that minimizes human intervention and enables continual learning under realistic assumptions. Specifically, our system, ReLMM, can learn continuously on a real-world platform without any environment instrumentation, with minimal human intervention, and without access to privileged information, such as maps, object positions, or a global view of the environment. Our method employs a modularized policy with components for manipulation and navigation, where uncertainty over the manipulation value function drives exploration for the navigation controller, and the success of the manipulation module provides rewards for navigation. We evaluate our method on a room cleanup task, where the robot must pick up each item of clutter from the floor. After a brief grasp pretraining phase with human oversight, ReLMM can learn navigation and grasping together fully automatically, in around 40 hours of real-world training with minimal human intervention.
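To illustrate the coupling the abstract describes, here is a minimal, hypothetical sketch of the idea: an ensemble stands in for the learned manipulation (grasp) value function, ensemble disagreement serves as the uncertainty that drives navigation exploration, and the navigation objective combines expected grasp success with an uncertainty bonus. All class and function names are illustrative assumptions, not the paper's actual implementation.

```python
import random
import statistics


class GraspValueEnsemble:
    """Toy ensemble of grasp-value estimates over discretized locations.

    A hypothetical stand-in for the paper's learned manipulation value
    function; member disagreement (std. dev.) acts as epistemic uncertainty.
    """

    def __init__(self, n_members=5, n_locations=10, seed=0):
        rng = random.Random(seed)
        # Each member maps a location index to an estimated grasp-success value.
        self.members = [
            {loc: rng.random() for loc in range(n_locations)}
            for _ in range(n_members)
        ]

    def mean_value(self, loc):
        return statistics.mean(m[loc] for m in self.members)

    def uncertainty(self, loc):
        return statistics.pstdev(m[loc] for m in self.members)


def navigation_score(ensemble, loc, bonus_weight=1.0):
    """Navigation prefers spots where grasping looks promising or uncertain."""
    return ensemble.mean_value(loc) + bonus_weight * ensemble.uncertainty(loc)


def choose_navigation_goal(ensemble, locations):
    """Pick the location maximizing expected grasp success plus exploration bonus."""
    return max(locations, key=lambda loc: navigation_score(ensemble, loc))
```

In the full system, the reward for the navigation policy would come from whether the grasp attempted at the chosen location actually succeeds, closing the loop between the two modules.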

Author Information

Charles Sun (University of California, Berkeley)
Jedrzej Orbik (UC Berkeley)
Coline Devin (UC Berkeley)
Abhishek Gupta (UC Berkeley)
Glen Berseth (UC Berkeley)
Sergey Levine (UC Berkeley)

Sergey Levine received a BS and MS in Computer Science from Stanford University in 2009, and a Ph.D. in Computer Science from Stanford University in 2014. He joined the faculty of the Department of Electrical Engineering and Computer Sciences at UC Berkeley in fall 2016. His work focuses on machine learning for decision making and control, with an emphasis on deep learning and reinforcement learning algorithms. Applications of his work include autonomous robots and vehicles, as well as computer vision and graphics. His research includes developing algorithms for end-to-end training of deep neural network policies that combine perception and control, scalable algorithms for inverse reinforcement learning, deep reinforcement learning algorithms, and more.