Poster

Hierarchically Decoupled Imitation For Morphological Transfer

Donald Hejna · Lerrel Pinto · Pieter Abbeel

Keywords: [ Deep Reinforcement Learning ] [ Robotics ] [ Reinforcement Learning - Deep RL ]

Tue 14 Jul 9 a.m. PDT — 9:45 a.m. PDT
Tue 14 Jul 8 p.m. PDT — 8:45 p.m. PDT

Abstract:

Learning long-range behaviors on complex high-dimensional agents is a fundamental problem in robot learning. For such tasks, we argue that transferring learned information from a morphologically simpler agent can massively improve the sample efficiency of a more complex one. To this end, we propose a hierarchical decoupling of policies into two parts: an independently learned low-level policy and a transferable high-level policy. To remedy poor transfer performance due to mismatch in morphologies, we contribute two key ideas. First, we show that incentivizing a complex agent's low-level policy to imitate a simpler agent's low-level policy significantly improves zero-shot high-level transfer. Second, we show that KL-regularized training of the high-level policy stabilizes learning and prevents mode collapse. Finally, on a suite of publicly released navigation and manipulation environments, we demonstrate the applicability of hierarchical transfer on long-range tasks across morphologies.
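To make the KL-regularized high-level training idea concrete, below is a minimal, hypothetical sketch of how such an objective could look: a policy-gradient term for the complex agent's high-level policy plus a KL penalty that keeps it close to a frozen high-level policy transferred from the simpler agent. All names (`hl_policy`, `prior_hl_policy`, the coefficient `beta`, the discrete subgoal space) are illustrative assumptions, not taken from the paper or its released code.

```python
import torch
from torch.distributions import Categorical, kl_divergence

def kl_regularized_hl_loss(hl_policy, prior_hl_policy, states, advantages, beta=0.1):
    """Sketch of a KL-regularized high-level objective (names are illustrative):
    a REINFORCE-style policy-gradient term plus a KL penalty toward a frozen
    high-level prior transferred from the morphologically simpler agent."""
    dist = hl_policy(states)           # complex agent's distribution over subgoals
    prior = prior_hl_policy(states)    # transferred (frozen) high-level policy
    subgoals = dist.sample()
    pg_loss = -(dist.log_prob(subgoals) * advantages).mean()  # policy-gradient term
    kl_loss = kl_divergence(dist, prior).mean()               # stay close to the prior
    return pg_loss + beta * kl_loss

# Toy usage with linear heads over 8 discrete subgoals (illustrative only).
states = torch.randn(32, 16)
advantages = torch.randn(32)
hl_head, prior_head = torch.nn.Linear(16, 8), torch.nn.Linear(16, 8)
loss = kl_regularized_hl_loss(lambda s: Categorical(logits=hl_head(s)),
                              lambda s: Categorical(logits=prior_head(s)),
                              states, advantages)
loss.backward()
```

In this reading, the KL term acts as a trust region around the transferred high-level behavior, which is one plausible mechanism for the stabilization and mode-collapse prevention the abstract describes.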
