Environments with procedurally generated content serve as important benchmarks for testing systematic generalization in deep reinforcement learning. In this setting, each level is an algorithmically created environment instance with a unique configuration of its factors of variation. Training on a prespecified subset of levels allows for testing generalization to unseen levels. What can be learned from a level depends on the current policy, yet prior work defaults to uniform sampling of training levels independently of the policy. We introduce Prioritized Level Replay (PLR), a general framework for selectively sampling the next training level by prioritizing those with higher estimated learning potential when revisited in the future. We show TD errors effectively estimate a level's future learning potential and, when used to guide the sampling procedure, induce an emergent curriculum of increasingly difficult levels. By adapting the sampling of training levels, PLR significantly improves sample efficiency and generalization on Procgen Benchmark, matching the previous state-of-the-art in test return, and readily combines with other methods. Combined with the previous leading method, PLR raises the state-of-the-art to over 76% improvement in test return relative to standard RL baselines.
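For intuition, below is a minimal sketch of the sampling idea described in the abstract: each training level is scored by the magnitude of the TD errors observed on its most recent rollout, scores are converted into rank-based sampling probabilities, and a staleness term is mixed in so rarely visited levels are eventually revisited. The class and parameter names here (`LevelSampler`, `temperature`, `staleness_coef`) are illustrative assumptions, not the paper's exact API.

```python
import numpy as np

class LevelSampler:
    """Sketch of score-and-staleness prioritized sampling over training levels."""

    def __init__(self, level_seeds, temperature=1.0, staleness_coef=0.1, seed=0):
        self.level_seeds = list(level_seeds)
        n = len(self.level_seeds)
        self.scores = np.zeros(n)      # estimated learning potential per level
        self.staleness = np.zeros(n)   # sampling steps since each level was last picked
        self.temperature = temperature
        self.staleness_coef = staleness_coef
        self.rng = np.random.default_rng(seed)

    def update_score(self, level_idx, td_errors):
        # Average absolute TD error of the latest episode on this level serves as
        # a proxy for how much the agent can still learn from replaying it.
        self.scores[level_idx] = np.mean(np.abs(td_errors))

    def sample(self):
        n = len(self.level_seeds)

        # Rank-based prioritization: the highest-scoring level gets rank 1.
        ranks = np.empty(n)
        ranks[np.argsort(-self.scores)] = np.arange(1, n + 1)
        weights = (1.0 / ranks) ** (1.0 / self.temperature)
        score_probs = weights / weights.sum()

        # Staleness bonus: favor levels that have not been sampled recently.
        if self.staleness.sum() > 0:
            staleness_probs = self.staleness / self.staleness.sum()
        else:
            staleness_probs = np.full(n, 1.0 / n)
        probs = (1 - self.staleness_coef) * score_probs \
                + self.staleness_coef * staleness_probs

        idx = self.rng.choice(n, p=probs)
        self.staleness += 1
        self.staleness[idx] = 0
        return idx, self.level_seeds[idx]
```

In a training loop, one would call `sample()` to pick the next level, collect a rollout on it with the current policy, and pass that rollout's TD errors to `update_score` before sampling again.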
Author Information
Minqi Jiang (University College London & Facebook AI Research)
Edward Grefenstette (Facebook AI Research & University College London)
Tim Rocktäschel (Facebook AI Research & University College London)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Spotlight: Prioritized Level Replay
  Wed. Jul 21st 01:20 -- 01:25 AM
More from the Same Authors
- 2022 Poster: Evolving Curricula with Regret-Based Environment Design
  Jack Parker-Holder · Minqi Jiang · Michael Dennis · Mikayel Samvelyan · Jakob Foerster · Edward Grefenstette · Tim Rocktäschel
- 2022 Spotlight: Evolving Curricula with Regret-Based Environment Design
  Jack Parker-Holder · Minqi Jiang · Michael Dennis · Mikayel Samvelyan · Jakob Foerster · Edward Grefenstette · Tim Rocktäschel
- 2020: The NetHack Learning Environment Q&A
  Tim Rocktäschel · Katja Hofmann
- 2020: The NetHack Learning Environment
  Tim Rocktäschel
- 2020 Workshop: 1st Workshop on Language in Reinforcement Learning (LaReL)
  Nantas Nardelli · Jelena Luketina · Jakob Foerster · Victor Zhong · Jacob Andreas · Tim Rocktäschel · Edward Grefenstette
- 2020 Poster: Learning Reasoning Strategies in End-to-End Differentiable Proving
  Pasquale Minervini · Sebastian Riedel · Pontus Stenetorp · Edward Grefenstette · Tim Rocktäschel
- 2019 Poster: A Baseline for Any Order Gradient Estimation in Stochastic Computation Graphs
  Jingkai Mao · Jakob Foerster · Tim Rocktäschel · Maruan Al-Shedivat · Gregory Farquhar · Shimon Whiteson
- 2019 Poster: CompILE: Compositional Imitation Learning and Execution
  Thomas Kipf · Yujia Li · Hanjun Dai · Vinicius Zambaldi · Alvaro Sanchez-Gonzalez · Edward Grefenstette · Pushmeet Kohli · Peter Battaglia
- 2019 Oral: CompILE: Compositional Imitation Learning and Execution
  Thomas Kipf · Yujia Li · Hanjun Dai · Vinicius Zambaldi · Alvaro Sanchez-Gonzalez · Edward Grefenstette · Pushmeet Kohli · Peter Battaglia
- 2019 Oral: A Baseline for Any Order Gradient Estimation in Stochastic Computation Graphs
  Jingkai Mao · Jakob Foerster · Tim Rocktäschel · Maruan Al-Shedivat · Gregory Farquhar · Shimon Whiteson
- 2017 Poster: Discovering Discrete Latent Topics with Neural Variational Inference
  Yishu Miao · Edward Grefenstette · Phil Blunsom
- 2017 Talk: Discovering Discrete Latent Topics with Neural Variational Inference
  Yishu Miao · Edward Grefenstette · Phil Blunsom
- 2017 Poster: Programming with a Differentiable Forth Interpreter
  Matko Bošnjak · Tim Rocktäschel · Jason Naradowsky · Sebastian Riedel
- 2017 Talk: Programming with a Differentiable Forth Interpreter
  Matko Bošnjak · Tim Rocktäschel · Jason Naradowsky · Sebastian Riedel