Discovering Options for Exploration by Minimizing Cover Time
Yuu Jinnai · Jee Won Park · David Abel · George Konidaris
Keywords: Theory and Algorithms
2019 Poster
Abstract
One of the main challenges in reinforcement learning is solving tasks with sparse reward. We show that the difficulty of discovering a distant rewarding state in an MDP is bounded by the expected cover time of a random walk over the graph induced by the MDP's transition dynamics. We therefore propose to accelerate exploration by constructing options that minimize cover time. We introduce a new option discovery algorithm that diminishes the expected cover time by connecting the most distant states in the state-space graph with options. We show empirically that the proposed algorithm improves learning in several domains with sparse rewards.
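The key step described in the abstract — connecting the most distant states in the state-space graph with an option — can be sketched as follows. This is a minimal illustration based only on the abstract's wording, not the paper's actual algorithm; the function name `most_distant_option`, the use of networkx, and the choice of shortest-path distance as the notion of "most distant" are assumptions.

```python
import itertools

import networkx as nx


def most_distant_option(graph: nx.Graph):
    """Return the pair of states with the largest shortest-path distance,
    plus the shortest path between them as a point-option trajectory.

    Illustrative sketch only: the paper's method may select the endpoint
    states differently (e.g., via spectral properties of the graph Laplacian).
    """
    # All-pairs shortest-path lengths over the state-space graph.
    dist = dict(nx.all_pairs_shortest_path_length(graph))
    # The two most distant states become the option's endpoints.
    s, t = max(
        itertools.combinations(graph.nodes, 2),
        key=lambda pair: dist[pair[0]][pair[1]],
    )
    return s, t, nx.shortest_path(graph, s, t)


if __name__ == "__main__":
    # Toy example: the state-space graph of a 5x5 grid world.
    g = nx.grid_2d_graph(5, 5)
    init, term, path = most_distant_option(g)
    print(init, term, len(path) - 1)  # opposite corners, 8 steps apart
```

Adding such an option shortens random-walk traversals between the two farthest regions of the graph, which is the mechanism by which the expected cover time is reduced.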