We use functional mirror ascent to propose a general framework, referred to as FMA-PG, for designing policy gradient (PG) methods. The functional perspective distinguishes between a policy's functional representation (its sufficient statistics) and its parameterization (how these statistics are represented), and naturally yields computationally efficient off-policy updates. For simple policy parameterizations, the FMA-PG framework guarantees that the optimal policy is a fixed point of the updates. It also handles complex policy parameterizations (e.g., neural networks) while guaranteeing policy improvement. Our framework unifies several PG methods and opens the way for designing sample-efficient variants of existing methods. Moreover, it recovers important implementation heuristics (e.g., using the forward vs. reverse KL divergence) in a principled way. With a softmax functional representation, FMA-PG yields a variant of TRPO with additional desirable properties. It also suggests an improved variant of PPO, whose robustness we empirically demonstrate on MuJoCo.
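The abstract does not spell out the per-iterate update. As a hedged illustration only (not the exact FMA-PG update), the classical mirror ascent step with a negative-entropy mirror map on a tabular softmax policy has a closed form: maximizing a linearized objective minus a KL penalty gives the exponentiated-gradient update pi_{t+1}(a) ∝ pi_t(a) exp(eta * Q(a)). A minimal sketch, with all names (`mirror_ascent_step`, `q_values`, `eta`) chosen here for illustration:

```python
import numpy as np

def mirror_ascent_step(policy, q_values, eta):
    """One mirror ascent step on a tabular policy (illustrative sketch).

    With the negative-entropy mirror map, the update
        pi_{t+1} = argmax_pi <pi, Q> - (1/eta) * KL(pi || pi_t)
    has the closed form pi_{t+1}(a) ∝ pi_t(a) * exp(eta * Q(a)),
    i.e., the exponentiated-gradient / multiplicative-weights update.
    """
    logits = np.log(policy) + eta * q_values
    logits -= logits.max()            # shift for numerical stability
    new_policy = np.exp(logits)
    return new_policy / new_policy.sum()

# Toy example: 3 actions with fixed action values. Repeated steps
# concentrate probability mass on the highest-value action.
q = np.array([1.0, 0.5, 0.1])
pi = np.ones(3) / 3                   # start from the uniform policy
for _ in range(50):
    pi = mirror_ascent_step(pi, q, eta=0.5)
```

This is the generic mirror ascent mechanic in function space; the paper's contribution is to carry out such updates on the policy's functional representation while accounting for an arbitrary parameterization.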
Author Information
Sharan Vaswani (University of Alberta)
Olivier Bachem (Google Brain)
Simone Totaro (Mila)
Matthieu Geist (Google)
Marlos C. Machado (DeepMind)
Pablo Samuel Castro (Google Brain)
Pablo was born and raised in Quito, Ecuador, and moved to Montreal after high school to study at McGill. He stayed in Montreal for the next 10 years, finished his bachelor's, worked at a flight simulator company, and eventually obtained his master's and PhD at McGill, focusing on reinforcement learning. After his PhD, Pablo did a 10-month postdoc in Paris before moving to Pittsburgh to join Google. He has worked at Google for almost 6 years and is currently a research software engineer at Google Brain in Montreal, focusing on fundamental reinforcement learning research, as well as machine learning and music. Aside from his interest in coding/AI/math, Pablo is an active musician (https://www.psctrio.com), loves running (5 marathons so far, including Boston!), and enjoys discussing politics and activism.
Nicolas Le Roux (Google)
More from the Same Authors
- 2021 : Representation Learning for Out-of-distribution Generalization in Downstream Tasks »
  Frederik Träuble · Andrea Dittadi · Manuel Wüthrich · Felix Widmaier · Peter V Gehler · Ole Winther · Francesco Locatello · Olivier Bachem · Bernhard Schölkopf · Stefan Bauer
- 2021 : Exploration-Driven Representation Learning in Reinforcement Learning »
  Akram Erraqabi · Mingde Zhao · Marlos C. Machado · Yoshua Bengio · Sainbayar Sukhbaatar · Ludovic Denoyer · Alessandro Lazaric
- 2021 : Representation Learning for Out-of-distribution Generalization in Downstream Tasks »
  Frederik Träuble · Andrea Dittadi · Manuel Wüthrich · Felix Widmaier · Peter Gehler · Ole Winther · Francesco Locatello · Olivier Bachem · Bernhard Schölkopf · Stefan Bauer
- 2021 : Offline Reinforcement Learning as Anti-Exploration »
  Shideh Rezaeifar · Robert Dadashi · Nino Vieillard · Léonard Hussenot · Olivier Bachem · Olivier Pietquin · Matthieu Geist
- 2023 : Decision-Aware Actor-Critic with Function Approximation and Theoretical Guarantees »
  Sharan Vaswani · Amirreza Kazemi · Reza Babanezhad · Nicolas Le Roux
- 2023 Poster: A Connection between One-Step RL and Critic Regularization in Reinforcement Learning »
  Benjamin Eysenbach · Matthieu Geist · Sergey Levine · Ruslan Salakhutdinov
- 2023 Poster: Target-based Surrogates for Stochastic Optimization »
  Jonathan Lavington · Sharan Vaswani · Reza Babanezhad · Mark Schmidt · Nicolas Le Roux
- 2023 Poster: Policy Mirror Ascent for Efficient and Independent Learning in Mean Field Games »
  Batuhan Yardim · Semih Cayci · Matthieu Geist · Niao He
- 2023 Poster: The Dormant Neuron Phenomenon in Deep Reinforcement Learning »
  Ghada Sokar · Rishabh Agarwal · Pablo Samuel Castro · Utku Evci
- 2023 Oral: The Dormant Neuron Phenomenon in Deep Reinforcement Learning »
  Ghada Sokar · Rishabh Agarwal · Pablo Samuel Castro · Utku Evci
- 2023 Poster: Bigger, Better, Faster: Human-level Atari with human-level efficiency »
  Max Schwarzer · Johan Obando Ceron · Aaron Courville · Marc Bellemare · Rishabh Agarwal · Pablo Samuel Castro
- 2023 Poster: Regularization and Variance-Weighted Regression Achieves Minimax Optimality in Linear MDPs: Theory and Practice »
  Toshinori Kitamura · Tadashi Kozuno · Yunhao Tang · Nino Vieillard · Michal Valko · Wenhao Yang · Jincheng Mei · Pierre Menard · Mohammad Gheshlaghi Azar · Remi Munos · Olivier Pietquin · Matthieu Geist · Csaba Szepesvari · Wataru Kumagai · Yutaka Matsuo
- 2022 : Estimating Policy Functions in Payments Systems Using Reinforcement Learning »
  Pablo Samuel Castro
- 2022 Poster: Large Batch Experience Replay »
  Thibault Lahire · Matthieu Geist · Emmanuel Rachelson
- 2022 Poster: Continuous Control with Action Quantization from Demonstrations »
  Robert Dadashi · Léonard Hussenot · Damien Vincent · Sertan Girgin · Anton Raichuk · Matthieu Geist · Olivier Pietquin
- 2022 Oral: Large Batch Experience Replay »
  Thibault Lahire · Matthieu Geist · Emmanuel Rachelson
- 2022 Spotlight: Continuous Control with Action Quantization from Demonstrations »
  Robert Dadashi · Léonard Hussenot · Damien Vincent · Sertan Girgin · Anton Raichuk · Matthieu Geist · Olivier Pietquin
- 2022 Poster: Towards Noise-adaptive, Problem-adaptive (Accelerated) Stochastic Gradient Descent »
  Sharan Vaswani · Benjamin Dubois-Taine · Reza Babanezhad
- 2022 Poster: The State of Sparse Training in Deep Reinforcement Learning »
  Laura Graesser · Utku Evci · Erich Elsen · Pablo Samuel Castro
- 2022 Spotlight: The State of Sparse Training in Deep Reinforcement Learning »
  Laura Graesser · Utku Evci · Erich Elsen · Pablo Samuel Castro
- 2022 Oral: Towards Noise-adaptive, Problem-adaptive (Accelerated) Stochastic Gradient Descent »
  Sharan Vaswani · Benjamin Dubois-Taine · Reza Babanezhad
- 2022 Poster: Scalable Deep Reinforcement Learning Algorithms for Mean Field Games »
  Mathieu Lauriere · Sarah Perrin · Sertan Girgin · Paul Muller · Ayush Jain · Theophile Cabannes · Georgios Piliouras · Julien Perolat · Romuald Elie · Olivier Pietquin · Matthieu Geist
- 2022 Spotlight: Scalable Deep Reinforcement Learning Algorithms for Mean Field Games »
  Mathieu Lauriere · Sarah Perrin · Sertan Girgin · Paul Muller · Ayush Jain · Theophile Cabannes · Georgios Piliouras · Julien Perolat · Romuald Elie · Olivier Pietquin · Matthieu Geist
- 2021 Poster: Beyond Variance Reduction: Understanding the True Impact of Baselines on Policy Optimization »
  Wesley Chung · Valentin Thomas · Marlos C. Machado · Nicolas Le Roux
- 2021 Poster: Hyperparameter Selection for Imitation Learning »
  Léonard Hussenot · Marcin Andrychowicz · Damien Vincent · Robert Dadashi · Anton Raichuk · Sabela Ramos · Nikola Momchev · Sertan Girgin · Raphael Marinier · Lukasz Stafiniak · Emmanuel Orsini · Olivier Bachem · Matthieu Geist · Olivier Pietquin
- 2021 Spotlight: Beyond Variance Reduction: Understanding the True Impact of Baselines on Policy Optimization »
  Wesley Chung · Valentin Thomas · Marlos C. Machado · Nicolas Le Roux
- 2021 Oral: Hyperparameter Selection for Imitation Learning »
  Léonard Hussenot · Marcin Andrychowicz · Damien Vincent · Robert Dadashi · Anton Raichuk · Sabela Ramos · Nikola Momchev · Sertan Girgin · Raphael Marinier · Lukasz Stafiniak · Emmanuel Orsini · Olivier Bachem · Matthieu Geist · Olivier Pietquin
- 2021 Poster: Revisiting Rainbow: Promoting more insightful and inclusive deep reinforcement learning research »
  Johan Obando Ceron · Pablo Samuel Castro
- 2021 Poster: Offline Reinforcement Learning with Pseudometric Learning »
  Robert Dadashi · Shideh Rezaeifar · Nino Vieillard · Léonard Hussenot · Olivier Pietquin · Matthieu Geist
- 2021 Spotlight: Offline Reinforcement Learning with Pseudometric Learning »
  Robert Dadashi · Shideh Rezaeifar · Nino Vieillard · Léonard Hussenot · Olivier Pietquin · Matthieu Geist
- 2021 Spotlight: Revisiting Rainbow: Promoting more insightful and inclusive deep reinforcement learning research »
  Johan Obando Ceron · Pablo Samuel Castro
- 2020 Poster: Rigging the Lottery: Making All Tickets Winners »
  Utku Evci · Trevor Gale · Jacob Menick · Pablo Samuel Castro · Erich Elsen
- 2020 Poster: Weakly-Supervised Disentanglement Without Compromises »
  Francesco Locatello · Ben Poole · Gunnar Rätsch · Bernhard Schölkopf · Olivier Bachem · Michael Tschannen
- 2020 Poster: Automatic Shortcut Removal for Self-Supervised Representation Learning »
  Matthias Minderer · Olivier Bachem · Neil Houlsby · Michael Tschannen
- 2019 Poster: Understanding the Impact of Entropy on Policy Optimization »
  Zafarali Ahmed · Nicolas Le Roux · Mohammad Norouzi · Dale Schuurmans
- 2019 Oral: Understanding the Impact of Entropy on Policy Optimization »
  Zafarali Ahmed · Nicolas Le Roux · Mohammad Norouzi · Dale Schuurmans
- 2019 Poster: A Theory of Regularized Markov Decision Processes »
  Matthieu Geist · Bruno Scherrer · Olivier Pietquin
- 2019 Poster: Learning from a Learner »
  Alexis Jacq · Matthieu Geist · Ana Paiva · Olivier Pietquin
- 2019 Poster: Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations »
  Francesco Locatello · Stefan Bauer · Mario Lucic · Gunnar Rätsch · Sylvain Gelly · Bernhard Schölkopf · Olivier Bachem
- 2019 Poster: The Value Function Polytope in Reinforcement Learning »
  Robert Dadashi · Marc Bellemare · Adrien Ali Taiga · Nicolas Le Roux · Dale Schuurmans
- 2019 Poster: High-Fidelity Image Generation With Fewer Labels »
  Mario Lucic · Michael Tschannen · Marvin Ritter · Xiaohua Zhai · Olivier Bachem · Sylvain Gelly
- 2019 Oral: The Value Function Polytope in Reinforcement Learning »
  Robert Dadashi · Marc Bellemare · Adrien Ali Taiga · Nicolas Le Roux · Dale Schuurmans
- 2019 Oral: A Theory of Regularized Markov Decision Processes »
  Matthieu Geist · Bruno Scherrer · Olivier Pietquin
- 2019 Oral: Learning from a Learner »
  Alexis Jacq · Matthieu Geist · Ana Paiva · Olivier Pietquin
- 2019 Oral: High-Fidelity Image Generation With Fewer Labels »
  Mario Lucic · Michael Tschannen · Marvin Ritter · Xiaohua Zhai · Olivier Bachem · Sylvain Gelly
- 2019 Oral: Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations »
  Francesco Locatello · Stefan Bauer · Mario Lucic · Gunnar Rätsch · Sylvain Gelly · Bernhard Schölkopf · Olivier Bachem