Poster

Skill Discovery for Exploration and Planning using Deep Skill Graphs

Akhil Bagaria · Jason Senthil · George Konidaris

Virtual

Keywords: [ Reinforcement Learning and Planning ] [ Deep RL ]


Abstract:

We introduce a new skill-discovery algorithm that builds a discrete graph representation of large continuous MDPs, where nodes correspond to skill subgoals and edges to skill policies. The agent constructs this graph during an unsupervised training phase, interleaving skill discovery with planning over the skills it has already learned, thereby gaining coverage over ever-increasing portions of the state space. Given a novel goal at test time, the agent plans with the acquired skill graph to reach a nearby state, then switches to learning to reach the goal. We show that the resulting algorithm, Deep Skill Graphs, outperforms both flat and existing hierarchical reinforcement learning methods on four difficult continuous control tasks.
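To make the plan-then-learn structure described in the abstract concrete, here is a minimal sketch in Python. This is not the authors' implementation: the class names, the use of networkx for the graph, and the Euclidean notion of a "nearby" state are all illustrative assumptions.

```python
# Minimal sketch of a skill graph and its test-time planning step.
# Assumptions (not from the paper): subgoals are represented as tuples of
# floats, proximity is Euclidean distance, and the graph is a networkx DiGraph.
import networkx as nx
import numpy as np


class Skill:
    """An option whose policy drives the agent from one subgoal region to another."""

    def __init__(self, policy):
        self.policy = policy  # e.g., a trained goal-conditioned network


class SkillGraph:
    def __init__(self):
        # Nodes: skill subgoals. Edges: the skill policies that connect them.
        self.graph = nx.DiGraph()

    def add_skill(self, src_subgoal, dst_subgoal, skill):
        self.graph.add_edge(src_subgoal, dst_subgoal, skill=skill)

    def nearest_node(self, state):
        # Pick the subgoal closest to `state` (Euclidean here; the paper's
        # notion of proximity may differ).
        return min(
            self.graph.nodes,
            key=lambda n: np.linalg.norm(np.asarray(n) - np.asarray(state)),
        )

    def plan(self, start_state, goal_state):
        """Return the skill sequence connecting the nodes nearest to start and goal."""
        src = self.nearest_node(start_state)
        dst = self.nearest_node(goal_state)
        path = nx.shortest_path(self.graph, src, dst)
        return [self.graph.edges[u, v]["skill"] for u, v in zip(path, path[1:])]
```

At test time, an agent following this structure would execute the returned skills in sequence to reach the subgoal nearest the novel goal, then fall back to ordinary goal-conditioned learning to cover the remaining distance.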
