Poster

PcLast: Discovering Plannable Continuous Latent States

Anurag Koul · Shivakanth Sujit · Shaoru Chen · Benjamin Evans · Lili Wu · Byron Xu · Rajan Chari · Riashat Islam · Raihan Seraj · Yonathan Efroni · Lekan Molu · Miroslav Dudik · John Langford · Alex Lamb

Hall C 4-9 #1200
[ Project Page ] [ Paper PDF ]
Wed 24 Jul 4:30 a.m. PDT — 6 a.m. PDT

Abstract: Goal-conditioned planning benefits from learned low-dimensional representations of rich observations. While compact latent representations, typically learned from variational autoencoders or inverse dynamics, enable goal-conditioned decision making, they ignore state reachability, hampering their performance. In this paper, we learn a representation that associates reachable states together for effective planning and goal-conditioned policy learning. We first learn a latent representation with multi-step inverse dynamics (to remove distracting information), and then transform this representation to associate reachable states together in $\ell_2$ space. We rigorously test our proposal in various simulation testbeds. Numerical results in reward-based settings show significant improvements in sample efficiency. Further, in reward-free settings this approach yields layered state abstractions that enable computationally efficient hierarchical planning for reaching ad hoc goals with zero additional samples.
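To make the abstract's two-stage recipe concrete, below is a minimal sketch of the idea, not the authors' implementation: stage 1 trains an encoder through a multi-step inverse-dynamics objective (predict the first action from latents of observations $k$ steps apart), and stage 2 applies a learned transform so that latents of mutually reachable states sit close in $\ell_2$ distance. All class names, network sizes, and the triplet-style reachability loss are illustrative assumptions; consult the paper for the actual objectives.

```python
# Illustrative sketch only (NOT the authors' code): encoder + multi-step
# inverse dynamics, followed by a contrastive l2 "reachability" transform.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Maps raw observations to a low-dimensional latent state."""
    def __init__(self, obs_dim: int, latent_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, obs):
        return self.net(obs)

class MultiStepInverseModel(nn.Module):
    """Predicts the first action a_t from (z_t, z_{t+k}); training the
    encoder through this objective discards action-irrelevant
    (distracting) information."""
    def __init__(self, latent_dim: int, num_actions: int):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(2 * latent_dim, 256), nn.ReLU(),
            nn.Linear(256, num_actions),
        )

    def forward(self, z_t, z_tk):
        return self.head(torch.cat([z_t, z_tk], dim=-1))

def inverse_dynamics_loss(encoder, inv_model, obs_t, obs_tk, action_t):
    """Stage 1: multi-step inverse dynamics (cross-entropy on a_t)."""
    logits = inv_model(encoder(obs_t), encoder(obs_tk))
    return F.cross_entropy(logits, action_t)

def reachability_loss(phi, z_t, z_tk, z_rand, margin: float = 1.0):
    """Stage 2 (assumed triplet form): the transform phi pulls latents of
    states reachable within k steps together in l2 and pushes random
    (likely unreachable) pairs at least `margin` apart."""
    pos = (phi(z_t) - phi(z_tk)).pow(2).sum(-1)
    neg = (phi(z_t) - phi(z_rand)).pow(2).sum(-1)
    return (pos + F.relu(margin - neg)).mean()
```

Under this reading, after stage 2 Euclidean distance in the transformed latent space approximates reachability, so goal-conditioned planning can use nearest-neighbor or clustered (layered) abstractions directly, which is what enables the zero-extra-sample hierarchical planning the abstract describes.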
