Ready Policy One: World Building Through Active Learning
Philip Ball · Jack Parker-Holder · Aldo Pacchiano · Krzysztof Choromanski · Stephen Roberts

Tue Jul 14 12:00 PM -- 12:45 PM & Wed Jul 15 01:00 AM -- 01:45 AM (PDT)

Model-Based Reinforcement Learning (MBRL) offers a promising direction for sample-efficient learning, often achieving state-of-the-art results for continuous control tasks. However, many existing MBRL methods rely on combining greedy policies with exploration heuristics, and even those which utilize principled exploration bonuses construct dual objectives in an ad hoc fashion. In this paper we introduce Ready Policy One (RP1), a framework that views MBRL as an active learning problem, where we aim to improve the world model in the fewest samples possible. RP1 achieves this by utilizing a hybrid objective function, which crucially adapts during optimization, allowing the algorithm to trade off reward vs. exploration at different stages of learning. In addition, we introduce a principled mechanism to terminate sample collection once we have a rich enough trajectory batch to improve the model. We rigorously evaluate our method on a variety of continuous control tasks, and demonstrate statistically significant gains over existing approaches.
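The hybrid objective described above can be illustrated with a minimal sketch: a reward term combined with an exploration bonus (here, disagreement across an ensemble of learned dynamics models as an epistemic-uncertainty proxy), with the trade-off weight adapted over training. All function names and the specific adaptation rule below are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def hybrid_objective(rewards, disagreements, beta):
    """Per-step hybrid objective: reward plus beta-weighted exploration bonus.

    Hypothetical form; the paper's actual objective and weighting may differ.
    """
    return rewards + beta * disagreements

def adapt_beta(beta, model_improvement, step=0.1):
    """Toy adaptation rule: weight exploration more while the world model is
    still improving, and anneal it back toward pure reward otherwise."""
    return beta + step if model_improvement > 0 else max(0.0, beta - step)

# Toy rollout: use ensemble prediction spread as the exploration bonus.
rng = np.random.default_rng(0)
rewards = rng.normal(size=5)                  # per-step environment rewards
ensemble_preds = rng.normal(size=(4, 5))      # 4 ensemble members, 5 steps
disagreements = ensemble_preds.std(axis=0)    # epistemic-uncertainty proxy
objective = hybrid_objective(rewards, disagreements, beta=0.5)
```

Separating the bonus and its weight this way makes the exploration/exploitation trade-off an explicit, adaptable quantity rather than a fixed heuristic baked into the policy.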

Author Information

Philip Ball (University of Oxford)
Jack Parker-Holder (University of Oxford)
Aldo Pacchiano (UC Berkeley)
Krzysztof Choromanski (Google)
Stephen Roberts (University of Oxford)
