Poster
Reinforcement Learning with General Utilities: Simpler Variance Reduction and Large State-Action Space
Anas Barakat · Ilyas Fatkhullin · Niao He
We consider the reinforcement learning (RL) problem with general utilities, which consists of maximizing a function of the state-action occupancy measure. Beyond the standard cumulative-reward RL setting, this problem includes constrained RL, pure exploration, and learning from demonstrations, among others, as particular cases. For this problem, we propose a simpler single-loop parameter-free normalized policy gradient algorithm. By implementing a recursive momentum variance reduction mechanism, our algorithm achieves $\tilde{\mathcal{O}}(\epsilon^{-3})$ and $\tilde{\mathcal{O}}(\epsilon^{-2})$ sample complexities for $\epsilon$-first-order stationarity and $\epsilon$-global optimality, respectively, under adequate assumptions. We further address the setting of large finite state-action spaces via linear function approximation of the occupancy measure and show a $\tilde{\mathcal{O}}(\epsilon^{-4})$ sample complexity for a simple policy gradient method with a linear regression subroutine.
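As a rough illustration of the recursive-momentum idea described in the abstract, the sketch below combines a normalized gradient ascent step with a STORM-style variance-reduced estimate of $\nabla_\theta F(\lambda(\pi_\theta))$. This is a minimal sketch under assumed interfaces, not the paper's exact algorithm: the names sample_batch and grad_estimate, and the step-size and momentum constants, are hypothetical placeholders.

    import numpy as np

    def normalized_pg_with_momentum(sample_batch, grad_estimate, theta0,
                                    iters=1000, step=0.01, eta=0.1):
        """Single-loop normalized policy gradient with a recursive-momentum
        (STORM-style) variance-reduced gradient estimate (illustrative sketch).

        sample_batch()           -- draws a fresh batch of trajectories (hypothetical)
        grad_estimate(theta, b)  -- stochastic estimate of grad_theta F(lambda(pi_theta))
                                    computed on batch b (hypothetical)
        """
        theta = np.asarray(theta0, dtype=float).copy()
        # initialize the momentum direction with a plain stochastic gradient
        d = grad_estimate(theta, sample_batch())
        for _ in range(iters):
            # normalized ascent step: unit-norm direction makes the step size
            # robust to the scale of the gradient estimate (parameter-free flavor)
            theta_new = theta + step * d / (np.linalg.norm(d) + 1e-12)
            batch = sample_batch()  # one fresh batch per iteration (single loop)
            # recursive momentum: d_t = g(theta_t) + (1 - eta) * (d_{t-1} - g(theta_{t-1})),
            # with both gradients evaluated on the SAME fresh batch
            d = grad_estimate(theta_new, batch) \
                + (1.0 - eta) * (d - grad_estimate(theta, batch))
            theta = theta_new
        return theta

Evaluating the gradient at both the new and the previous iterate on the same batch is what makes the momentum correction variance-reducing; with independent batches the correction term would add noise instead of cancelling it.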
Author Information
Anas Barakat (ETH Zurich)
Ilyas Fatkhullin (ETH Zurich)
Niao He (ETH Zurich)
More from the Same Authors
- 2021 : Sample Complexity and Overparameterization Bounds for Temporal Difference Learning with Neural Network Approximation »
  Semih Cayci · Siddhartha Satpathi · Niao He · R Srikant
- 2021 : EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback »
  Peter Richtarik · Ilyas Fatkhullin
- 2021 : Linear Convergence of Entropy-Regularized Natural Policy Gradient with Linear Function Approximation »
  Semih Cayci · Niao He · R Srikant
- 2023 : Momentum Provably Improves Error Feedback! »
  Ilyas Fatkhullin · Alexander Tyurin · Peter Richtarik
- 2023 Poster: Stochastic Policy Gradient Methods: Improved Sample Complexity for Fisher-non-degenerate Policies »
  Ilyas Fatkhullin · Anas Barakat · Anastasia Kireeva · Niao He
- 2023 Poster: Policy Mirror Ascent for Efficient and Independent Learning in Mean Field Games »
  Batuhan Yardim · Semih Cayci · Matthieu Geist · Niao He
- 2022 Poster: 3PC: Three Point Compressors for Communication-Efficient Distributed Training and a Better Theory for Lazy Aggregation »
  Peter Richtarik · Igor Sokolov · Elnur Gasanov · Ilyas Fatkhullin · Zhize Li · Eduard Gorbunov
- 2022 Spotlight: 3PC: Three Point Compressors for Communication-Efficient Distributed Training and a Better Theory for Lazy Aggregation »
  Peter Richtarik · Igor Sokolov · Elnur Gasanov · Ilyas Fatkhullin · Zhize Li · Eduard Gorbunov
- 2022 Poster: A Natural Actor-Critic Framework for Zero-Sum Markov Games »
  Ahmet Alacaoglu · Luca Viano · Niao He · Volkan Cevher
- 2022 Spotlight: A Natural Actor-Critic Framework for Zero-Sum Markov Games »
  Ahmet Alacaoglu · Luca Viano · Niao He · Volkan Cevher
- 2021 Workshop: Workshop on Reinforcement Learning Theory »
  Shipra Agrawal · Simon Du · Niao He · Csaba Szepesvari · Lin Yang