Spotlight
A Temporal-Difference Approach to Policy Gradient Estimation
Samuele Tosatto · Andrew Patterson · Martha White · A. Mahmood

Wed Jul 20 11:05 AM -- 11:10 AM (PDT) @ Room 307

The policy gradient theorem (Sutton et al., 2000) prescribes the use of a cumulative discounted state distribution under the target policy to approximate the gradient. In practice, most algorithms based on this theorem break this assumption, introducing a distribution shift that can cause convergence to poor solutions. In this paper, we propose a new approach to reconstructing the policy gradient from the start state without requiring a particular sampling strategy. The policy gradient calculation in this form can be simplified in terms of a gradient critic, which can be estimated recursively via a new Bellman equation of gradients. By using temporal-difference updates of the gradient critic from an off-policy data stream, we develop the first estimator that side-steps the distribution shift issue in a model-free way. We prove that, under certain realizability conditions, our estimator is unbiased regardless of the sampling strategy. We empirically show that our technique achieves a superior bias-variance trade-off and performance in the presence of off-policy samples.
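For intuition only, here is a minimal sketch (not the paper's notation) of how a Bellman-style recursion for a gradient critic can arise. Differentiating the standard Bellman equation for Q^pi with respect to the policy parameters theta, and assuming the reward and transition kernel P do not depend on theta, gives

\[
\nabla_\theta Q^\pi(s,a) \;=\; \gamma\, \mathbb{E}_{s' \sim P(\cdot \mid s,a)}\!\left[ \sum_{a'} \nabla_\theta \pi_\theta(a' \mid s')\, Q^\pi(s',a') \;+\; \pi_\theta(a' \mid s')\, \nabla_\theta Q^\pi(s',a') \right],
\]

and, writing \(\mu_0\) for the start-state distribution (a symbol introduced here),

\[
\nabla_\theta J(\theta) \;=\; \mathbb{E}_{s_0 \sim \mu_0}\!\left[ \sum_{a} \nabla_\theta \pi_\theta(a \mid s_0)\, Q^\pi(s_0,a) \;+\; \pi_\theta(a \mid s_0)\, \nabla_\theta Q^\pi(s_0,a) \right].
\]

The first identity is a recursion that a temporal-difference learner can bootstrap, much as TD(0) bootstraps the ordinary Bellman equation, and the second reconstructs the policy gradient from the start state; this is, roughly, the role the abstract assigns to the gradient critic, though the paper's exact formulation may differ.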

Author Information

Samuele Tosatto (University of Alberta)
Andrew Patterson (University of Alberta)
Martha White (University of Alberta)
A. Mahmood
