CoMic: Complementary Task Learning & Mimicry for Reusable Skills

Leonard Hasenclever · Fabio Pardo · Raia Hadsell · Nicolas Heess · Josh Merel

Keywords: [ Deep Reinforcement Learning ] [ Planning and Control ] [ Transfer and Multitask Learning ] [ Planning, Control, and Multiagent Learning ]


Learning to control complex bodies and reuse learned behaviors is a longstanding challenge in continuous control. We study the problem of learning reusable humanoid skills by imitating motion capture data and jointly training on complementary tasks. We show that it is possible to learn reusable skills through reinforcement learning on 50 times more motion capture data than prior work. We systematically compare a variety of network architectures across different data regimes, both in terms of imitation performance and transfer to challenging locomotion tasks. Finally, we show that it is possible to interleave motion capture tracking with training on complementary tasks, enriching the resulting skill space and enabling the reuse of skills not well covered by the motion capture data, such as getting up from the ground or catching a ball.
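The reuse scheme described above is commonly realized as a two-level architecture: a low-level skill policy, pretrained (e.g. by motion capture tracking) and then frozen, decodes a continuous skill latent into motor commands, while a small high-level policy trained on the new task only has to output points in that latent skill space. The sketch below is a minimal, hypothetical illustration of that interface with random linear policies; it is not the paper's actual model, and all class and dimension names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)


class LowLevelSkillPolicy:
    """Hypothetical frozen skill module: (proprioception, skill latent) -> action.

    Stands in for a policy pretrained by motion-capture tracking; its
    weights would be fixed during transfer."""

    def __init__(self, obs_dim, latent_dim, act_dim):
        self.W = rng.normal(scale=0.1, size=(act_dim, obs_dim + latent_dim))

    def act(self, obs, z):
        # tanh keeps motor commands bounded in [-1, 1]
        return np.tanh(self.W @ np.concatenate([obs, z]))


class HighLevelTaskPolicy:
    """Hypothetical task controller: task observation -> skill latent.

    Only this (much smaller) module is trained on a new transfer task."""

    def __init__(self, obs_dim, latent_dim):
        self.W = rng.normal(scale=0.1, size=(latent_dim, obs_dim))

    def act(self, obs):
        return np.tanh(self.W @ obs)


obs_dim, latent_dim, act_dim = 8, 4, 6
low = LowLevelSkillPolicy(obs_dim, latent_dim, act_dim)
high = HighLevelTaskPolicy(obs_dim, latent_dim)

obs = rng.normal(size=obs_dim)
z = high.act(obs)          # high level picks a point in skill space
action = low.act(obs, z)   # frozen low level decodes it to motor commands
print(action.shape)        # -> (6,)
```

Because the high-level policy's action space is the low-dimensional skill latent rather than raw joint torques, exploration on a new task is restricted to behaviors the skill module can already produce, which is what makes the skills reusable.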
