Using Relative Novelty to Identify Useful Temporal Abstractions in Reinforcement Learning
Özgür Şimşek - University of Massachusetts Amherst
Andrew Barto - University of Massachusetts Amherst
We present a new method for automatically creating useful temporal abstractions in reinforcement learning. We argue that states that allow the agent to transition to a different region of the state space are useful subgoals, and we propose a method for identifying them using the concept of relative novelty. When such a state is identified, a temporally extended activity (e.g., an option) is generated that takes the agent efficiently to this state. We illustrate the utility of the method in a number of tasks.
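To make the core idea concrete, the following is a minimal sketch of a relative-novelty score over a state trajectory. It is an illustrative interpretation, not the paper's exact algorithm: here a state's novelty is taken to decay with its visit count, and a trajectory point scores highly when the states that follow it are more novel than those that precede it, marking it as a candidate transition into a new region. The function names, the 1/sqrt(n) novelty measure, and the `lag` window size are assumptions introduced for this sketch.

```python
import math
from collections import defaultdict


def novelty(visits, state):
    """Novelty of a state, assumed here to decay as 1/sqrt(visit count)."""
    return 1.0 / math.sqrt(visits[state])


def relative_novelty_scores(trajectory, lag=3):
    """Score each interior point of a trajectory by the ratio of the total
    novelty of the `lag` states following it to that of the `lag` states
    preceding it. High scores suggest a transition into novel territory."""
    visits = defaultdict(int)
    # First pass: count visits so novelty is defined for every state.
    for s in trajectory:
        visits[s] += 1
    scores = {}
    for t in range(lag, len(trajectory) - lag):
        before = sum(novelty(visits, s) for s in trajectory[t - lag:t])
        after = sum(novelty(visits, s) for s in trajectory[t + 1:t + 1 + lag])
        scores[t] = after / before
    return scores
```

On a trajectory that loops among a few well-visited states and then passes through a doorway into fresh states, the scores peak near the doorway, since the states ahead of it are less visited than those behind it.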