Oral
Composing Entropic Policies using Divergence Correction
Jonathan Hunt · Andre Barreto · Timothy Lillicrap · Nicolas Heess

Tue Jun 11 11:30 AM -- 11:35 AM (PDT) @ Hall B

Deep reinforcement learning algorithms have achieved remarkable successes, but often require vast amounts of experience to solve a task. Composing skills mastered in one task in order to efficiently solve novel challenges promises dramatic improvements in data efficiency. Here, we build on two recent works on composing behaviors represented as action-value functions. We analyze prior methods and show that they perform poorly in some situations. As part of this analysis, we extend an important generalization of policy improvement to the maximum entropy framework and introduce an algorithm for the practical implementation of successor features in continuous action spaces. We then propose a novel approach that addresses the failure cases of prior work and, in principle, recovers the optimal policy during transfer. This method works by explicitly learning the (discounted, future) divergence between base policies. We study this approach in the tabular case and on non-trivial continuous control problems with compositional structure, and show that it outperforms or matches existing methods across all tasks considered.
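The abstract's central idea, correcting a naive composition of soft Q-functions by a learned discounted future divergence between the base policies, can be sketched in a few lines of tabular code. The sketch below is illustrative only and is not the authors' algorithm: the random MDP, the Rényi-style per-state divergence, the Bellman-style backup for the correction term C, the way C is subtracted at the end, and all names (soft_q1, renyi_divergence, ...) are assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions = 6, 3
gamma, alpha, b = 0.95, 1.0, 0.5   # discount, entropy temperature, mixing weight

# Random transition kernel P[s, a, s'] and two "pre-trained" soft Q-tables
# (random here so the sketch runs end to end; in practice these would come
# from solving the two base tasks with maximum-entropy RL).
P = rng.random((n_states, n_actions, n_states))
P /= P.sum(axis=-1, keepdims=True)
soft_q1 = rng.normal(size=(n_states, n_actions))
soft_q2 = rng.normal(size=(n_states, n_actions))


def boltzmann(q, temperature):
    """Maximum-entropy (Boltzmann) policy induced by a soft Q-table."""
    logits = q / temperature
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits)
    return probs / probs.sum(axis=1, keepdims=True)


def renyi_divergence(p, q, order):
    """Per-state Renyi divergence of the given order between two policies."""
    return np.log(np.sum(p ** order * q ** (1.0 - order), axis=1)) / (order - 1.0)


pi1, pi2 = boltzmann(soft_q1, alpha), boltzmann(soft_q2, alpha)
d_states = renyi_divergence(pi1, pi2, b)        # divergence at each state

# Naive composition of the base soft Q-functions for the mixed task.
q_naive = b * soft_q1 + (1.0 - b) * soft_q2
pi_naive = boltzmann(q_naive, alpha)

# Fixed-point iteration for C(s, a): the discounted divergence the composed
# policy is expected to accumulate in the future (a Bellman-style backup used
# here for illustration; the paper's recursion differs in its exact form).
C = np.zeros((n_states, n_actions))
for _ in range(500):
    next_state_value = d_states + np.sum(pi_naive * C, axis=1)  # value at s'
    C = gamma * (P @ next_state_value)                          # expectation over s'

# Divergence-corrected composition (the paper's correction enters with a
# specific scaling; subtracting C here only conveys the structure of the idea).
q_corrected = q_naive - C
print("max correction applied:", np.abs(q_naive - q_corrected).max())
```

Because the backup above is a gamma-contraction, the fixed-point iteration converges; scaling the same idea to the continuous control problems mentioned in the abstract would presumably require function approximation in place of these tables.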

Author Information

Jonathan Hunt (DeepMind)
Andre Barreto (DeepMind)
Timothy Lillicrap (Google DeepMind)
Nicolas Heess (DeepMind)
