Efficient exploration is an unsolved problem in Reinforcement Learning that is usually addressed by reactively rewarding the agent for fortuitously encountering novel situations. This paper introduces an efficient active exploration algorithm, Model-Based Active eXploration (MAX), which uses an ensemble of forward models to plan to observe novel events. Novelty is assessed by measuring the potential disagreement between ensemble members using a principled criterion derived from the Bayesian perspective. We show empirically that in semi-random discrete environments, where directed exploration is critical to making progress, MAX is at least an order of magnitude more efficient than strong baselines. MAX also scales to high-dimensional continuous environments, where it builds task-agnostic models that can be used for any downstream task.
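To make the exploration criterion concrete, below is a minimal sketch of an ensemble-disagreement utility. Caveats: the sketch substitutes a Jensen-Shannon-style disagreement measure with a moment-matched Gaussian approximation of the mixture entropy for the paper's exact criterion, it assumes each ensemble member outputs a diagonal-Gaussian next-state prediction, and the names (gaussian_entropy, ensemble_disagreement) are illustrative, not from the paper.

import numpy as np

def gaussian_entropy(var):
    # Differential entropy of a diagonal Gaussian with per-dimension
    # variances `var`, summed over the state dimensions.
    return 0.5 * np.sum(np.log(2.0 * np.pi * np.e * var), axis=-1)

def ensemble_disagreement(means, variances):
    # means, variances: arrays of shape (n_models, state_dim) holding each
    # ensemble member's Gaussian prediction of the next state.
    #
    # Returns the Jensen-Shannon-style disagreement
    #     H(mixture) - mean_i H(p_i),
    # where the mixture entropy (no closed form for Gaussians) is
    # upper-bounded by the entropy of one moment-matched Gaussian.
    mean_member_entropy = np.mean(gaussian_entropy(variances))
    # Moment-match the uniform mixture of the members with a single Gaussian.
    mix_mean = means.mean(axis=0)
    mix_var = (variances + means**2).mean(axis=0) - mix_mean**2
    mixture_entropy = gaussian_entropy(mix_var)
    return mixture_entropy - mean_member_entropy

# Example: 5 ensemble members predicting a 3-dimensional next state.
rng = np.random.default_rng(0)
means = rng.normal(size=(5, 3))
variances = np.full((5, 3), 0.1)
print(ensemble_disagreement(means, variances))  # large when members disagree

An exploration policy would then be optimized inside the learned models to maximize this utility over imagined rollouts, steering the agent toward transitions on which the ensemble members disagree.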
Author Information
Pranav Shyam (NNAISENSE)
Wojciech Jaśkowski (NNAISENSE)
Faustino Gomez (NNAISENSE SA)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Poster: Model-Based Active Exploration
  Thu Jun 13th, 01:30 -- 04:00 AM, Room: Pacific Ballroom
More from the Same Authors
- 2017 Poster: Attentive Recurrent Comparators
  Pranav Shyam · Shubham Gupta · Ambedkar Dukkipati
- 2017 Talk: Attentive Recurrent Comparators
  Pranav Shyam · Shubham Gupta · Ambedkar Dukkipati