
Statistically Efficient Off-Policy Policy Gradients
Nathan Kallus · Masatoshi Uehara

Wed Jul 15 03:00 PM -- 03:45 PM & Thu Jul 16 04:00 AM -- 04:45 AM (PDT)

Policy gradient methods in reinforcement learning update policy parameters by taking steps in the direction of an estimated gradient of policy value. In this paper, we consider the efficient estimation of policy gradients from off-policy data, where the estimation is particularly non-trivial. We derive the asymptotic lower bound on the feasible mean-squared error in both Markov and non-Markov decision processes and show that existing estimators fail to achieve it in general settings. We propose a meta-algorithm that achieves the lower bound without any parametric assumptions and exhibits a unique 4-way double robustness property. We discuss how to estimate the nuisances that the algorithm relies on. Finally, we establish guarantees on the rate at which we approach a stationary point when taking steps in the direction of our new estimated policy gradient.
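To illustrate the off-policy setting the abstract describes, here is a minimal sketch of a plain importance-sampling policy gradient estimator over a single trajectory. This is a standard baseline, not the paper's efficient or doubly robust estimator: the paper's meta-algorithm additionally incorporates estimated nuisances (e.g., value functions and density ratios) to attain the semiparametric lower bound. All function names below (`target_prob_fn`, `grad_log_prob_fn`) are hypothetical placeholders for a parameterized target policy.

```python
import numpy as np

def off_policy_pg_estimate(states, actions, rewards, behavior_probs,
                           target_prob_fn, grad_log_prob_fn):
    """Importance-sampling estimate of the policy gradient from one
    trajectory collected under a behavior policy.

    NOTE: an illustrative baseline only. The paper's estimator
    further uses estimated nuisances to achieve the asymptotic
    lower bound on mean-squared error; that machinery is omitted.
    """
    # Per-step importance ratios pi_theta(a|s) / pi_b(a|s)
    ratios = np.array([target_prob_fn(s, a)
                       for s, a in zip(states, actions)]) / np.asarray(behavior_probs)
    cum_ratio = np.cumprod(ratios)                      # cumulative ratio up to step t
    returns = np.cumsum(np.asarray(rewards)[::-1])[::-1]  # reward-to-go at each step
    grads = np.array([grad_log_prob_fn(s, a)
                      for s, a in zip(states, actions)])  # score functions, shape (T, d)
    # Score-function (REINFORCE-style) estimator, reweighted to the target policy
    return np.sum(cum_ratio[:, None] * returns[:, None] * grads, axis=0)
```

When the behavior and target policies coincide, every ratio is 1 and this reduces to the on-policy REINFORCE gradient; in genuinely off-policy regimes the cumulative ratios can have high variance, which is one reason efficient estimators such as the paper's are needed.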

Author Information

Nathan Kallus (Cornell University)
Masatoshi Uehara (Harvard University)
