A general sample complexity analysis of vanilla policy gradient
Rui Yuan · Robert Gower · Alessandro Lazaric
The policy gradient (PG) is one of the most popular methods for solving reinforcement learning (RL) problems. However, a solid theoretical understanding of even the "vanilla" PG has remained elusive for a long time. In this paper, we apply recent tools developed for the analysis of SGD in non-convex optimization to obtain convergence guarantees for both REINFORCE and GPOMDP under a smoothness assumption on the objective function and weak conditions on the second moment of the norm of the estimated gradient. When instantiated under common assumptions on the policy space, our general result immediately recovers existing $O(\epsilon^{-4})$ sample complexity guarantees, but for wider ranges of parameters (e.g., step size and batch size $m$) than in the previous literature. Notably, our result covers the single-trajectory case (i.e., $m=1$) and provides a more accurate analysis of the dependence on problem-specific parameters, fixing previous results available in the literature. We believe that the integration of state-of-the-art tools from non-convex optimization may help identify a much broader range of problems where PG methods enjoy strong theoretical guarantees.
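For context, REINFORCE estimates the gradient of the expected discounted return $J(\theta)$ from sampled trajectories by weighting the sum of grad-log-policy terms with the trajectory's total discounted return, while GPOMDP weights each reward $\gamma^t r_t$ only by the grad-log terms up to time $t$. The "weak conditions on the second moment" are, roughly, bounds of the ABC type from the recent SGD literature, of the form $\mathbb{E}\big[\|\widehat{\nabla}J(\theta)\|^2\big] \le 2A\,(J^* - J(\theta)) + B\,\|\nabla J(\theta)\|^2 + C$ (see the paper for the exact statement). Below is a minimal, self-contained sketch of mini-batch REINFORCE with batch size m; the toy MDP, constants, and function names are illustrative assumptions, not taken from the paper. Setting m = 1 gives the single-trajectory case covered by the analysis.

```python
import numpy as np

# Minimal REINFORCE sketch on a toy 2-state, 2-action MDP.
# The MDP, constants, and names below are illustrative assumptions.
rng = np.random.default_rng(0)

n_states, n_actions, horizon, gamma = 2, 2, 20, 0.99
# P[s, a] = distribution over next states; R[s, a] = deterministic reward.
P = np.array([[[0.9, 0.1], [0.1, 0.9]],
              [[0.8, 0.2], [0.2, 0.8]]])
R = np.array([[1.0, 0.0],
              [0.0, 1.0]])

def policy(theta, s):
    """Softmax policy pi_theta(.|s) over actions, parameterized by logits."""
    logits = theta[s]
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

def sample_trajectory(theta):
    """Roll out one trajectory and record grad-log-policy terms and rewards."""
    s = 0
    grads, rewards = [], []
    for _ in range(horizon):
        probs = policy(theta, s)
        a = rng.choice(n_actions, p=probs)
        # Gradient of log pi(a|s) w.r.t. the logits of a softmax policy.
        g = np.zeros_like(theta)
        g[s] = -probs
        g[s, a] += 1.0
        grads.append(g)
        rewards.append(R[s, a])
        s = rng.choice(n_states, p=P[s, a])
    return grads, rewards

def reinforce_gradient(theta, m):
    """Average the REINFORCE estimator over a batch of m trajectories."""
    grad = np.zeros_like(theta)
    for _ in range(m):
        grads, rewards = sample_trajectory(theta)
        # Total discounted return weights the full sum of grad-log terms.
        ret = sum(gamma**t * r for t, r in enumerate(rewards))
        grad += sum(grads) * ret
    return grad / m

theta = np.zeros((n_states, n_actions))
step_size, m = 0.05, 1  # m = 1: the single-trajectory case
for it in range(500):
    theta += step_size * reinforce_gradient(theta, m)  # gradient *ascent* on J

print("final policy at state 0:", policy(theta, 0))
```

Switching to GPOMDP would only change `reinforce_gradient`: instead of weighting the full sum of grad-log terms by the total return, each reward $\gamma^t r_t$ is multiplied by the running sum of grad-log terms up to time $t$, which typically lowers the variance of the estimate.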
Author Information
Rui Yuan (Facebook AI Research)
Robert Gower (Télécom ParisTech)
https://gowerrobert.github.io/
Alessandro Lazaric (Facebook AI Research)
More from the Same Authors
- 2021: Reinforcement Learning in Linear MDPs: Constant Regret and Representation Selection »
  Matteo Papini · Andrea Tirinzoni · Aldo Pacchiano · Marcello Restelli · Alessandro Lazaric · Matteo Pirotta
- 2021: A Fully Problem-Dependent Regret Lower Bound for Finite-Horizon MDPs »
  Andrea Tirinzoni · Matteo Pirotta · Alessandro Lazaric
- 2021: Stochastic Shortest Path: Minimax, Parameter-Free and Towards Horizon-Free Regret »
  Jean Tarbouriech · Simon Du · Matteo Pirotta · Michal Valko · Alessandro Lazaric
- 2021: Direct then Diffuse: Incremental Unsupervised Skill Discovery for State Covering and Goal Reaching »
  Pierre-Alexandre Kamienny · Jean Tarbouriech · Alessandro Lazaric · Ludovic Denoyer
- 2021: Exploration-Driven Representation Learning in Reinforcement Learning »
  Akram Erraqabi · Mingde Zhao · Marlos C. Machado · Yoshua Bengio · Sainbayar Sukhbaatar · Ludovic Denoyer · Alessandro Lazaric
- 2023 Poster: A Model Based Method for Minimizing CVaR and Beyond »
  Si Yi Meng · Robert Gower
- 2023 Poster: Layered State Discovery for Incremental Autonomous Exploration »
  Liyu Chen · Andrea Tirinzoni · Alessandro Lazaric · Matteo Pirotta
- 2022 Workshop: Responsible Decision Making in Dynamic Environments »
  Virginie Do · Thorsten Joachims · Alessandro Lazaric · Joelle Pineau · Matteo Pirotta · Harsh Satija · Nicolas Usunier
- 2022 Poster: Scaling Gaussian Process Optimization by Evaluating a Few Unique Candidates Multiple Times »
  Daniele Calandriello · Luigi Carratino · Alessandro Lazaric · Michal Valko · Lorenzo Rosasco
- 2022 Spotlight: Scaling Gaussian Process Optimization by Evaluating a Few Unique Candidates Multiple Times »
  Daniele Calandriello · Luigi Carratino · Alessandro Lazaric · Michal Valko · Lorenzo Rosasco
- 2021: Invited Talk by Alessandro Lazaric »
  Alessandro Lazaric
- 2021 Poster: Leveraging Good Representations in Linear Contextual Bandits »
  Matteo Papini · Andrea Tirinzoni · Marcello Restelli · Alessandro Lazaric · Matteo Pirotta
- 2021 Spotlight: Leveraging Good Representations in Linear Contextual Bandits »
  Matteo Papini · Andrea Tirinzoni · Marcello Restelli · Alessandro Lazaric · Matteo Pirotta
- 2021 Poster: Reinforcement Learning with Prototypical Representations »
  Denis Yarats · Rob Fergus · Alessandro Lazaric · Lerrel Pinto
- 2021 Spotlight: Reinforcement Learning with Prototypical Representations »
  Denis Yarats · Rob Fergus · Alessandro Lazaric · Lerrel Pinto
- 2020 Poster: No-Regret Exploration in Goal-Oriented Reinforcement Learning »
  Jean Tarbouriech · Evrard Garcelon · Michal Valko · Matteo Pirotta · Alessandro Lazaric
- 2020 Poster: Efficient Optimistic Exploration in Linear-Quadratic Regulators via Lagrangian Relaxation »
  Marc Abeille · Alessandro Lazaric
- 2020 Poster: Learning Near Optimal Policies with Low Inherent Bellman Error »
  Andrea Zanette · Alessandro Lazaric · Mykel Kochenderfer · Emma Brunskill
- 2020 Poster: Meta-learning with Stochastic Linear Bandits »
  Leonardo Cella · Alessandro Lazaric · Massimiliano Pontil
- 2020 Poster: Near-linear time Gaussian process optimization with adaptive batching and resparsification »
  Daniele Calandriello · Luigi Carratino · Alessandro Lazaric · Michal Valko · Lorenzo Rosasco
- 2019 Poster: SGD: General Analysis and Improved Rates »
  Robert Gower · Nicolas Loizou · Xun Qian · Alibek Sailanbayev · Egor Shulgin · Peter Richtarik
- 2019 Oral: SGD: General Analysis and Improved Rates »
  Robert Gower · Nicolas Loizou · Xun Qian · Alibek Sailanbayev · Egor Shulgin · Peter Richtarik
- 2018 Poster: Improved large-scale graph learning through ridge spectral sparsification »
  Daniele Calandriello · Alessandro Lazaric · Ioannis Koutis · Michal Valko
- 2018 Oral: Improved large-scale graph learning through ridge spectral sparsification »
  Daniele Calandriello · Alessandro Lazaric · Ioannis Koutis · Michal Valko
- 2018 Poster: Efficient Bias-Span-Constrained Exploration-Exploitation in Reinforcement Learning »
  Ronan Fruit · Matteo Pirotta · Alessandro Lazaric · Ronald Ortner
- 2018 Poster: Improved Regret Bounds for Thompson Sampling in Linear Quadratic Control Problems »
  Marc Abeille · Alessandro Lazaric
- 2018 Oral: Improved Regret Bounds for Thompson Sampling in Linear Quadratic Control Problems »
  Marc Abeille · Alessandro Lazaric
- 2018 Oral: Efficient Bias-Span-Constrained Exploration-Exploitation in Reinforcement Learning »
  Ronan Fruit · Matteo Pirotta · Alessandro Lazaric · Ronald Ortner