

Learning Portable Representations for High-Level Planning

Steven James · Benjamin Rosman · George Konidaris

Keywords: [ Planning and Control ] [ Representation Learning ] [ Transfer and Multitask Learning ] [ Reinforcement Learning ] [ Reinforcement Learning - General ]


We present a framework for autonomously learning a portable representation that describes a collection of low-level continuous environments. We show that these abstract representations can be learned in a task-independent, egocentric space specific to the agent, and that, when grounded with problem-specific information, they are provably sufficient for planning. We demonstrate transfer in two different domains, where an agent learns a portable, task-independent symbolic vocabulary, as well as operators expressed in that vocabulary, and then learns to instantiate those operators on a per-task basis. This reduces the number of samples required to learn a representation of a new task.
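The separation the abstract describes — a portable, egocentric operator vocabulary reused across tasks, instantiated with problem-specific information per task — can be illustrated with a minimal sketch. All names here (`PortableOperator`, `ground`, the door symbols) are hypothetical and chosen for illustration; they are not the paper's actual representation or API.

```python
from dataclasses import dataclass

# Hypothetical sketch of the portable-then-grounded pattern:
# a portable operator is written over egocentric (agent-centred)
# symbols only, so it transfers across tasks; grounding pairs those
# symbols with problem-specific labels for planning in one task.

@dataclass(frozen=True)
class PortableOperator:
    name: str
    preconditions: frozenset  # egocentric symbols that must hold
    effects: frozenset        # egocentric symbols made true

def ground(op: PortableOperator, location: str) -> dict:
    """Instantiate a portable operator for a specific task by
    attaching a problem-specific location label to each symbol."""
    return {
        "name": f"{op.name}@{location}",
        "pre": {(s, location) for s in op.preconditions},
        "eff": {(s, location) for s in op.effects},
    }

# A portable operator learned once, reusable in any task:
open_door = PortableOperator(
    name="open-door",
    preconditions=frozenset({"facing-closed-door"}),
    effects=frozenset({"facing-open-door"}),
)

# Grounding it for a particular problem instance:
grounded = ground(open_door, "room-3")
```

Because the portable operator never mentions task-specific state, only the cheap grounding step must be relearned per task, which is the source of the sample-efficiency gain the abstract claims.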
