

Poster in Workshop: The Many Facets of Preference-Based Learning

Learning Higher Order Skills that Efficiently Compose

Anthony Liu · Dong Ki Kim · Sungryull Sohn · Honglak Lee


Abstract:

Hierarchical reinforcement learning allows an agent to effectively solve complex tasks by leveraging the compositional structure of tasks and executing a sequence of skills. However, our examination shows that prior work focuses on learning individual skills without considering how to efficiently combine them, which can lead to sub-optimal performance. To address this problem, we propose a novel framework, called second-order skills (SOS), for learning skills that facilitate efficient execution of skill sequences. Specifically, second-order skills (which can be generalized to higher orders) are learned from an extended perspective that takes into account the next skill required to accomplish a task. We theoretically demonstrate that our method guarantees more efficient performance on the downstream task compared to previous approaches that do not consider second-order skills. Our empirical experiments also show that learning second-order skills improves learning performance compared to state-of-the-art baselines across diverse benchmark domains.
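As a rough illustration of the central idea, a second-order skill is conditioned not only on its own subgoal but also on the subgoal of the skill that follows it, so it can terminate in states that are favorable for the next skill. The sketch below is a minimal, hypothetical rendering of that conditioning in plain PyTorch; the class name, dimensions, and architecture are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SecondOrderSkillPolicy(nn.Module):
    """Hypothetical sketch: a skill policy conditioned on both the current
    subgoal and the next subgoal in the plan (the 'second-order' context).
    All names and dimensions are assumptions for illustration only."""

    def __init__(self, obs_dim: int, subgoal_dim: int, action_dim: int, hidden_dim: int = 128):
        super().__init__()
        # Input: observation + current subgoal + next subgoal.
        self.net = nn.Sequential(
            nn.Linear(obs_dim + 2 * subgoal_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, action_dim),
        )

    def forward(self, obs, current_subgoal, next_subgoal):
        # A first-order skill would ignore `next_subgoal`; conditioning on it
        # lets the skill end in states convenient for the skill that follows.
        x = torch.cat([obs, current_subgoal, next_subgoal], dim=-1)
        return self.net(x)


# Usage sketch: compute action logits for a batch of transitions.
policy = SecondOrderSkillPolicy(obs_dim=16, subgoal_dim=4, action_dim=6)
obs = torch.randn(32, 16)
cur_sg = torch.randn(32, 4)
nxt_sg = torch.randn(32, 4)
logits = policy(obs, cur_sg, nxt_sg)  # shape: (32, 6)
```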
