The policy gradient theorem (Sutton et al., 2000) prescribes the use of a cumulative discounted state distribution under the target policy to approximate the gradient. Most algorithms based on this theorem break this assumption in practice, introducing a distribution shift that can cause convergence to poor solutions. In this paper, we propose a new approach to reconstructing the policy gradient from the start state without requiring a particular sampling strategy. In this form, the policy gradient calculation can be expressed in terms of a gradient critic, which can be estimated recursively thanks to a new Bellman equation of gradients. By applying temporal-difference updates to the gradient critic from an off-policy data stream, we develop the first estimator that side-steps the distribution-shift issue in a model-free way. We prove that, under certain realizability conditions, our estimator is unbiased regardless of the sampling strategy. We empirically show that our technique achieves a superior bias-variance trade-off and performance in the presence of off-policy samples.
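To make the abstract's description concrete, below is a minimal tabular sketch of the kind of update it describes: a gradient critic G(s, a), meant to approximate the gradient of the action-value function with respect to the policy parameters, is refined with a temporal-difference target, and the policy gradient is then reconstructed at the start state. This is an illustrative sketch under simplifying assumptions (finite states and actions, known policy probabilities and their parameter gradients), not the paper's implementation; all names such as `td_update_gradient_critic` are hypothetical.

```python
import numpy as np

# Illustrative sketch only (not the paper's code).
# Assumed shapes (hypothetical):
#   q[s, a]        -- ordinary action-value estimate (scalar per state-action)
#   pi[s, a]       -- policy probability of action a in state s
#   grad_pi[s, a]  -- gradient of pi(a|s) w.r.t. the policy parameters (n_params,)
#   G[s, a]        -- gradient critic, approximating d Q^pi(s, a) / d theta (n_params,)
# The gradient critic satisfies a Bellman-like recursion, so it can be learned
# with temporal-difference updates from (possibly off-policy) transitions.

def td_update_gradient_critic(G, q, pi, grad_pi, s, a, s_next,
                              gamma=0.99, alpha=0.1):
    """One TD update of the gradient critic for the transition (s, a, s_next)."""
    n_actions = pi.shape[1]
    # Expected "Bellman equation of gradients" target over next actions under pi.
    target = gamma * sum(
        grad_pi[s_next, a2] * q[s_next, a2] + pi[s_next, a2] * G[s_next, a2]
        for a2 in range(n_actions)
    )
    G[s, a] += alpha * (target - G[s, a])


def policy_gradient_at_start(G, q, pi, grad_pi, s0):
    """Reconstruct the policy gradient from the start state s0."""
    n_actions = pi.shape[1]
    return sum(
        grad_pi[s0, a] * q[s0, a] + pi[s0, a] * G[s0, a]
        for a in range(n_actions)
    )


# Example usage with hypothetical sizes: 5 states, 3 actions, 4 policy parameters.
n_s, n_a, n_p = 5, 3, 4
G = np.zeros((n_s, n_a, n_p))
q = np.zeros((n_s, n_a))
pi = np.full((n_s, n_a), 1.0 / n_a)
grad_pi = np.zeros((n_s, n_a, n_p))
td_update_gradient_critic(G, q, pi, grad_pi, s=0, a=1, s_next=2)
grad_J = policy_gradient_at_start(G, q, pi, grad_pi, s0=0)
```

Because the TD target only requires sampled transitions and the policy's own probabilities, the update does not depend on which behavior policy generated the data, which is the sense in which the estimator avoids the distribution-shift issue.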
Author Information
Samuele Tosatto (University of Alberta)
Andrew Patterson (University of Alberta)
Martha White (University of Alberta)
A. Rupam Mahmood (University of Alberta)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: A Temporal-Difference Approach to Policy Gradient Estimation
  Wed. Jul 20 through Thu. Jul 21, Hall E #815
More from the Same Authors
- 2023 Poster: Trajectory-Aware Eligibility Traces for Off-Policy Reinforcement Learning
  Brett Daley · Martha White · Christopher Amato · Marlos C. Machado
- 2020: Panel Discussion
  Eric Eaton · Martha White · Doina Precup · Irina Rish · Harm van Seijen
- 2020: QA for invited talk 5 White
  Martha White
- 2020: Invited talk 5 White
  Martha White
- 2020: An Off-policy Policy Gradient Theorem: A Tale About Weightings - Martha White
  Martha White
- 2020: Speaker Panel
  Csaba Szepesvari · Martha White · Sham Kakade · Gergely Neu · Shipra Agrawal · Akshay Krishnamurthy
- 2020 Poster: Gradient Temporal-Difference Learning with Regularized Corrections
  Sina Ghiassian · Andrew Patterson · Shivam Garg · Dhawal Gupta · Adam White · Martha White
- 2020 Poster: Selective Dyna-style Planning Under Limited Model Capacity
  Zaheer Abbas · Samuel Sokota · Erin Talvitie · Martha White
- 2020 Poster: Optimizing for the Future in Non-Stationary MDPs
  Yash Chandak · Georgios Theocharous · Shiv Shankar · Martha White · Sridhar Mahadevan · Philip Thomas
- 2019 Workshop: Exploration in Reinforcement Learning Workshop
  Benjamin Eysenbach · Surya Bhupatiraju · Shixiang Gu · Harrison Edwards · Martha White · Pierre-Yves Oudeyer · Kenneth Stanley · Emma Brunskill
- 2018 Poster: Reinforcement Learning with Function-Valued Action Spaces for Partial Differential Equation Control
  Yangchen Pan · Amir-massoud Farahmand · Martha White · Saleh Nabi · Piyush Grover · Daniel Nikovski
- 2018 Oral: Reinforcement Learning with Function-Valued Action Spaces for Partial Differential Equation Control
  Yangchen Pan · Amir-massoud Farahmand · Martha White · Saleh Nabi · Piyush Grover · Daniel Nikovski
- 2018 Poster: Improving Regression Performance with Distributional Losses
  Ehsan Imani · Martha White
- 2018 Oral: Improving Regression Performance with Distributional Losses
  Ehsan Imani · Martha White