Several algorithms have been proposed to sample the replay buffer of deep Reinforcement Learning (RL) agents non-uniformly in order to speed up learning, but very few of these sampling schemes have theoretical foundations. Prioritized Experience Replay, for instance, can provide good performance but remains a hyperparameter-sensitive heuristic. In this work, we cast the replay buffer sampling problem as an importance sampling one for estimating the gradient. This allows us to derive the theoretically optimal sampling distribution, which yields the best theoretical convergence speed. Building on this ideal sampling scheme, we exhibit new theoretical foundations for Prioritized Experience Replay. Since the optimal sampling distribution is intractable, we make several approximations that provide good results in practice and introduce, among others, LaBER (Large Batch Experience Replay), an easy-to-code and efficient method for sampling the replay buffer. LaBER, which can be combined with Deep Q-Networks, distributional RL agents, or actor-critic methods, yields improved performance over a diverse range of Atari games and PyBullet environments, compared both to the base agent it is implemented on and to other prioritization schemes.
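The abstract describes LaBER's core mechanism: draw a large batch uniformly from the replay buffer, then down-sample it to a mini-batch in proportion to a surrogate priority, reweighting the loss so the gradient estimate stays unbiased. A minimal NumPy sketch of that down-sampling step, using |TD error| as the surrogate priority (the function name `laber_subsample` and the exact weighting are illustrative assumptions, not the paper's reference implementation):

```python
import numpy as np

def laber_subsample(td_errors, mini_batch_size, rng=None, eps=1e-6):
    """Down-sample a large uniform batch to a mini-batch.

    Each transition is drawn with probability proportional to its
    surrogate priority (here |TD error|). The returned importance
    weights (mean priority / picked priority) keep the weighted
    gradient estimate unbiased with respect to the large batch.
    """
    rng = np.random.default_rng(rng)
    priorities = np.abs(td_errors) + eps        # surrogate per-sample priority
    probs = priorities / priorities.sum()       # sampling distribution over the large batch
    idx = rng.choice(len(td_errors), size=mini_batch_size, p=probs, replace=True)
    weights = priorities.mean() / priorities[idx]
    return idx, weights

# Usage: reduce a large batch of 64 TD errors to a mini-batch of 8.
td = np.random.default_rng(0).normal(size=64)
idx, w = laber_subsample(td, 8, rng=0)
```

Unbiasedness follows because each picked sample's weight cancels its sampling probability: summing `probs[i] * weights[i] * f(i)` over the large batch recovers the uniform average of `f`.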
Author Information
Thibault Lahire (Université de Toulouse, ISAE-SUPAERO)
Matthieu Geist (Google)
Emmanuel Rachelson (ISAE-SUPAERO)
Dr. Emmanuel Rachelson is an associate professor in Machine Learning and Artificial Intelligence at ISAE-SUPAERO. He founded the Data and Decision Sciences Master-level curriculum, which he now oversees. He is also the leader of the ISAE-SUPAERO Reinforcement Learning Initiative. He graduated from ISAE-SUPAERO and received an MS in Artificial Intelligence from University Paul Sabatier in 2005. He received a PhD in Artificial Intelligence from the University of Toulouse in 2009. His research focuses on robust sequential decision-making under uncertainty, and he specializes in Reinforcement Learning, while keeping strong connections with the broader fields of Machine Learning and Operations Research. Among his current research interests are robustness and dependability in (deep) Reinforcement Learning and Monte Carlo Tree Search. He has contributed to applications in energy management, UAV planning and control, robotics, satellite systems (radio resource management and imaging tasks), air traffic management, and aircraft design. One specific focus is on how Reinforcement Learning principles can be applied to control optimization processes under resource constraints.
Related Events (a corresponding poster, oral, or spotlight)
-
2022 Poster: Large Batch Experience Replay »
Thu. Jul 21 through Fri. Jul 22, Room Hall E #921
More from the Same Authors
-
2021 : A functional mirror ascent view of policy gradient methods with function approximation »
Sharan Vaswani · Olivier Bachem · Simone Totaro · Matthieu Geist · Marlos C. Machado · Pablo Samuel Castro · Nicolas Le Roux -
2021 : Offline Reinforcement Learning as Anti-Exploration »
Shideh Rezaeifar · Robert Dadashi · Nino Vieillard · Léonard Hussenot · Olivier Bachem · Olivier Pietquin · Matthieu Geist -
2023 Poster: A Connection between One-Step RL and Critic Regularization in Reinforcement Learning »
Benjamin Eysenbach · Matthieu Geist · Sergey Levine · Ruslan Salakhutdinov -
2023 Poster: Policy Mirror Ascent for Efficient and Independent Learning in Mean Field Games »
Batuhan Yardim · Semih Cayci · Matthieu Geist · Niao He -
2023 Poster: Regularization and Variance-Weighted Regression Achieves Minimax Optimality in Linear MDPs: Theory and Practice »
Toshinori Kitamura · Tadashi Kozuno · Yunhao Tang · Nino Vieillard · Michal Valko · Wenhao Yang · Jincheng Mei · Pierre Menard · Mohammad Gheshlaghi Azar · Remi Munos · Olivier Pietquin · Matthieu Geist · Csaba Szepesvari · Wataru Kumagai · Yutaka Matsuo -
2022 Poster: Continuous Control with Action Quantization from Demonstrations »
Robert Dadashi · Léonard Hussenot · Damien Vincent · Sertan Girgin · Anton Raichuk · Matthieu Geist · Olivier Pietquin -
2022 Spotlight: Continuous Control with Action Quantization from Demonstrations »
Robert Dadashi · Léonard Hussenot · Damien Vincent · Sertan Girgin · Anton Raichuk · Matthieu Geist · Olivier Pietquin -
2022 Poster: Scalable Deep Reinforcement Learning Algorithms for Mean Field Games »
Mathieu Lauriere · Sarah Perrin · Sertan Girgin · Paul Muller · Ayush Jain · Theophile Cabannes · Georgios Piliouras · Julien Perolat · Romuald Elie · Olivier Pietquin · Matthieu Geist -
2022 Spotlight: Scalable Deep Reinforcement Learning Algorithms for Mean Field Games »
Mathieu Lauriere · Sarah Perrin · Sertan Girgin · Paul Muller · Ayush Jain · Theophile Cabannes · Georgios Piliouras · Julien Perolat · Romuald Elie · Olivier Pietquin · Matthieu Geist -
2021 Poster: Hyperparameter Selection for Imitation Learning »
Léonard Hussenot · Marcin Andrychowicz · Damien Vincent · Robert Dadashi · Anton Raichuk · Sabela Ramos · Nikola Momchev · Sertan Girgin · Raphael Marinier · Lukasz Stafiniak · Emmanuel Orsini · Olivier Bachem · Matthieu Geist · Olivier Pietquin -
2021 Oral: Hyperparameter Selection for Imitation Learning »
Léonard Hussenot · Marcin Andrychowicz · Damien Vincent · Robert Dadashi · Anton Raichuk · Sabela Ramos · Nikola Momchev · Sertan Girgin · Raphael Marinier · Lukasz Stafiniak · Emmanuel Orsini · Olivier Bachem · Matthieu Geist · Olivier Pietquin -
2021 Poster: Offline Reinforcement Learning with Pseudometric Learning »
Robert Dadashi · Shideh Rezaeifar · Nino Vieillard · Léonard Hussenot · Olivier Pietquin · Matthieu Geist -
2021 Spotlight: Offline Reinforcement Learning with Pseudometric Learning »
Robert Dadashi · Shideh Rezaeifar · Nino Vieillard · Léonard Hussenot · Olivier Pietquin · Matthieu Geist -
2019 Poster: A Theory of Regularized Markov Decision Processes »
Matthieu Geist · Bruno Scherrer · Olivier Pietquin -
2019 Poster: Learning from a Learner »
Alexis Jacq · Matthieu Geist · Ana Paiva · Olivier Pietquin -
2019 Oral: A Theory of Regularized Markov Decision Processes »
Matthieu Geist · Bruno Scherrer · Olivier Pietquin -
2019 Oral: Learning from a Learner »
Alexis Jacq · Matthieu Geist · Ana Paiva · Olivier Pietquin