More Robust Doubly Robust Off-policy Evaluation
Mehrdad Farajtabar · Yinlam Chow · Mohammad Ghavamzadeh

Wed Jul 11 09:15 AM -- 12:00 PM (PDT) @ Hall B #62

We study the problem of off-policy evaluation (OPE) in reinforcement learning (RL), where the goal is to estimate the performance of a policy from data generated by another policy (or policies). In particular, we focus on the doubly robust (DR) estimators that consist of an importance sampling (IS) component and a performance model, and combine the low (or zero) bias of IS with the low variance of the model. Although the accuracy of the model has a huge impact on the overall performance of DR, most of the work on using DR estimators in OPE has focused on improving the IS part, with little attention to how the model is learned. In this paper, we propose alternative DR estimators, called more robust doubly robust (MRDR), that learn the model parameters by minimizing the variance of the DR estimator. We first present a formulation for learning the DR model in RL. We then derive formulas for the variance of the DR estimator in both contextual bandits and RL, such that their gradients w.r.t. the model parameters can be estimated from samples, and propose methods to efficiently minimize the variance. We prove that the MRDR estimators are strongly consistent and asymptotically optimal. Finally, we evaluate MRDR on bandit and RL benchmark problems, and compare its performance with existing methods.
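To make the structure of the DR estimator concrete, below is a minimal sketch of the standard contextual-bandit DR estimate described in the abstract: a model-based baseline plus an importance-weighted correction of the model's residual. All function and variable names here are illustrative, not from the paper; the paper's MRDR contribution is in *how* the model is fit (minimizing this estimator's variance), which is not shown.

```python
import numpy as np

def dr_estimate(rewards, behavior_probs, target_probs, q_logged, v_target):
    """Doubly robust off-policy value estimate for contextual bandits.

    rewards        : observed rewards r_i for the logged actions a_i
    behavior_probs : mu(a_i | x_i), behavior policy's probability of a_i
    target_probs   : pi(a_i | x_i), target policy's probability of a_i
    q_logged       : model prediction Q_hat(x_i, a_i) for the logged action
    v_target       : model value of the target policy at x_i,
                     i.e. sum_a pi(a | x_i) * Q_hat(x_i, a)
    """
    rho = target_probs / behavior_probs  # importance weights
    # DR = model baseline + importance-weighted model residual:
    # a good model shrinks the residual (low variance); the IS correction
    # removes the model's bias in expectation.
    return np.mean(v_target + rho * (rewards - q_logged))
```

Note the two limiting cases: if the model is exact, the residual term vanishes and the estimate is purely model-based; if the model is identically zero, the estimate reduces to plain importance sampling.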

Author Information

Mehrdad Farajtabar (Georgia Tech)
Yinlam Chow (DeepMind)
Mohammad Ghavamzadeh (Facebook AI Research)
