Oral
CompILE: Compositional Imitation Learning and Execution
Thomas Kipf · Yujia Li · Hanjun Dai · Vinicius Zambaldi · Alvaro Sanchez · Edward Grefenstette · Pushmeet Kohli · Peter Battaglia

Wed Jun 12th 04:40 -- 05:00 PM @ Hall B

We introduce Compositional Imitation Learning and Execution (CompILE): a framework for learning reusable, variable-length segments of hierarchically-structured behavior from demonstration data. CompILE uses a novel unsupervised, fully-differentiable sequence segmentation module to learn latent encodings of sequential data that can be re-composed and executed to perform new tasks. Once trained, our model generalizes to sequences of longer length and from environment instances not seen during training. We evaluate CompILE in a challenging 2D multi-task environment and a continuous control task, and show that it can find correct task boundaries and event encodings in an unsupervised manner. Latent codes and associated behavior policies discovered by CompILE can be used by a hierarchical agent, where the high-level policy selects actions in the latent code space, and the low-level, task-specific policies are simply the learned decoders. We found that our CompILE-based agent could learn given only sparse rewards, where agents without task-specific policies struggle.
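The hierarchical execution described above (a high-level policy selecting actions in the latent code space, with the learned decoders acting as low-level, task-specific policies) can be sketched as follows. This is a hypothetical toy illustration under assumed shapes and a random linear decoder, not the authors' implementation; `low_level_policy` and `high_level_policy` are illustrative names.

```python
import numpy as np

# Hypothetical sketch of CompILE-style hierarchical execution:
# the high-level policy picks a latent "skill" code z for a segment;
# the low-level decoder policy maps (state, z) -> action.
# Shapes and the linear decoder are assumptions for illustration only.

rng = np.random.default_rng(0)

LATENT_DIM = 4   # assumed dimensionality of the latent code space
STATE_DIM = 3    # assumed state dimensionality
N_ACTIONS = 5    # assumed discrete action count

# Toy linear decoder standing in for a learned low-level policy.
W = rng.normal(size=(N_ACTIONS, STATE_DIM + LATENT_DIM))

def low_level_policy(state, z):
    """Decode a latent skill code and state into a discrete action."""
    logits = W @ np.concatenate([state, z])
    return int(np.argmax(logits))

def high_level_policy():
    """Select a latent code for the current sub-task (random stand-in
    for a trained high-level policy)."""
    return rng.normal(size=LATENT_DIM)

# Execute one segment: the high-level policy commits to a single code z,
# and the low-level policy rolls it out over several environment steps.
z = high_level_policy()
actions = [low_level_policy(rng.normal(size=STATE_DIM), z)
           for _ in range(5)]
print(actions)
```

The key design point the paper exploits is that the high-level agent acts in the (much smaller) latent code space rather than the raw action space, which is what makes learning feasible under sparse rewards.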

Author Information

Thomas Kipf (University of Amsterdam)
Yujia Li (DeepMind)
Hanjun Dai (Georgia Tech)
Vinicius Zambaldi (DeepMind)
Alvaro Sanchez (DeepMind)
Edward Grefenstette (Facebook AI Research / UCL)
Pushmeet Kohli (DeepMind)
Peter Battaglia (DeepMind)
