We study the problem of off-policy evaluation (OPE) in reinforcement learning (RL), where the goal is to estimate the performance of a policy from data generated by one or more other policies. In particular, we focus on doubly robust (DR) estimators, which combine an importance sampling (IS) component with a performance model, and thus enjoy the low (or zero) bias of IS together with the low variance of the model. Although the accuracy of the model strongly affects the overall performance of DR, most work on DR estimators for OPE has focused on improving the IS part, with little attention to how the model is learned. In this paper, we propose alternative DR estimators, called more robust doubly robust (MRDR), that learn the model parameters by minimizing the variance of the DR estimator. We first present a formulation for learning the DR model in RL. We then derive formulas for the variance of the DR estimator in both contextual bandits and RL, such that their gradients w.r.t. the model parameters can be estimated from samples, and propose methods to efficiently minimize this variance. We prove that the MRDR estimators are strongly consistent and asymptotically optimal. Finally, we evaluate MRDR on bandit and RL benchmark problems and compare its performance with that of existing methods.
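To make the estimator concrete, here is a minimal numpy sketch of the DR estimator for a tabular contextual bandit, together with the empirical-variance objective that MRDR minimizes over the reward model. Everything here (the names dr_estimate and mrdr_objective, the tabular reward table standing in for model parameters) is an illustrative assumption, not the paper's implementation:

    import numpy as np

    def dr_estimate(ctx, act, rew, pi_e, mu, r_hat):
        """Doubly robust value estimate for evaluation policy pi_e from
        data logged under behavior policy mu (tabular contextual bandit).

        ctx, act, rew : (n,) logged contexts, actions, rewards
        pi_e, mu      : (n_ctx, n_act) policy probability tables
        r_hat         : (n_ctx, n_act) reward model (stand-in for parameters)
        """
        rho = pi_e[ctx, act] / mu[ctx, act]               # importance weights
        v_model = (pi_e[ctx] * r_hat[ctx]).sum(axis=1)    # model-based term
        terms = v_model + rho * (rew - r_hat[ctx, act])   # per-sample DR terms
        return terms.mean(), terms

    def mrdr_objective(ctx, act, rew, pi_e, mu, r_hat):
        # MRDR chooses the model to minimize the empirical variance of the
        # per-sample DR terms (rather than fitting the rewards directly).
        _, terms = dr_estimate(ctx, act, rew, pi_e, mu, r_hat)
        return terms.var()

    # Toy usage: random policies and Bernoulli rewards.
    rng = np.random.default_rng(0)
    n_ctx, n_act, n = 5, 3, 1000
    mu = rng.dirichlet(np.ones(n_act), size=n_ctx)
    pi_e = rng.dirichlet(np.ones(n_act), size=n_ctx)
    true_r = rng.uniform(size=(n_ctx, n_act))
    ctx = rng.integers(n_ctx, size=n)
    act = np.array([rng.choice(n_act, p=mu[c]) for c in ctx])
    rew = rng.binomial(1, true_r[ctx, act]).astype(float)
    r_hat = rng.uniform(size=(n_ctx, n_act))              # any reward model
    v_dr, _ = dr_estimate(ctx, act, rew, pi_e, mu, r_hat)
    print(v_dr, mrdr_objective(ctx, act, rew, pi_e, mu, r_hat))

In the paper, the variance is derived so that its gradient w.r.t. the model parameters can be estimated from samples; in this toy setting, differentiating mrdr_objective through r_hat with an autodiff framework would play the same role.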
Author Information
Mehrdad Farajtabar (Georgia Tech)
Yinlam Chow (DeepMind)
Mohammad Ghavamzadeh (Facebook AI Research)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Oral: More Robust Doubly Robust Off-policy Evaluation »
  Wed. Jul 11th 09:50 -- 10:00 AM Room A1
More from the Same Authors
- 2022 : SAFER: Data-Efficient and Safe Reinforcement Learning via Skill Acquisition »
  Dylan Slack · Yinlam Chow · Bo Dai · Nevan Wichers
- 2020 : Panel discussion »
  Neil Lawrence · Mohammad Ghavamzadeh · Leilani Gilpin · Huyen Nguyen · Ernest Mwebaze · Nevena Lalic
- 2020 : Conservative Exploration in Bandits and Reinforcement Learning »
  Mohammad Ghavamzadeh
- 2020 Poster: Predictive Coding for Locally-Linear Control »
  Rui Shu · Tung Nguyen · Yinlam Chow · Tuan Pham · Khoat Than · Mohammad Ghavamzadeh · Stefano Ermon · Hung Bui
- 2020 Poster: Adaptive Sampling for Estimating Probability Distributions »
  Shubhanshu Shekhar · Tara Javidi · Mohammad Ghavamzadeh
- 2020 Poster: Multi-step Greedy Reinforcement Learning Algorithms »
  Manan Tomar · Yonathan Efroni · Mohammad Ghavamzadeh
- 2019 : Panel discussion with Craig Boutilier (Google Research), Emma Brunskill (Stanford), Chelsea Finn (Google Brain, Stanford, UC Berkeley), Mohammad Ghavamzadeh (Facebook AI), John Langford (Microsoft Research), and David Silver (DeepMind) »
  Peter Stone · Craig Boutilier · Emma Brunskill · Chelsea Finn · John Langford · David Silver · Mohammad Ghavamzadeh
- 2019 : Posters »
  Zhengxing Chen · Juan Jose Garau Luis · Ignacio Albert Smet · Aditya Modi · Sabina Tomkins · Riley Simmons-Edler · Hongzi Mao · Alexander Irpan · Hao Lu · Rose Wang · Subhojyoti Mukherjee · Aniruddh Raghu · Syed Arbab Mohd Shihab · Byung Hoon Ahn · Rasool Fakoor · Pratik Chaudhari · Elena Smirnova · Min-hwan Oh · Xiaocheng Tang · Tony Qin · Qingyang Li · Marc Brittain · Ian Fox · Supratik Paul · Xiaofeng Gao · Yinlam Chow · Gabriel Dulac-Arnold · Ofir Nachum · Nikos Karampatziakis · Bharathan Balaji · Supratik Paul · Ali Davody · Djallel Bouneffouf · Himanshu Sahni · Soo Kim · Andrey Kolobov · Alexander Amini · Yao Liu · Xinshi Chen · Craig Boutilier
- 2019 Poster: Garbage In, Reward Out: Bootstrapping Exploration in Multi-Armed Bandits »
  Branislav Kveton · Csaba Szepesvari · Sharan Vaswani · Zheng Wen · Tor Lattimore · Mohammad Ghavamzadeh
- 2019 Oral: Garbage In, Reward Out: Bootstrapping Exploration in Multi-Armed Bandits »
  Branislav Kveton · Csaba Szepesvari · Sharan Vaswani · Zheng Wen · Tor Lattimore · Mohammad Ghavamzadeh
- 2018 Poster: Path Consistency Learning in Tsallis Entropy Regularized MDPs »
  Yinlam Chow · Ofir Nachum · Mohammad Ghavamzadeh
- 2018 Oral: Path Consistency Learning in Tsallis Entropy Regularized MDPs »
  Yinlam Chow · Ofir Nachum · Mohammad Ghavamzadeh
- 2017 Poster: Active Learning for Accurate Estimation of Linear Models »
  Carlos Riquelme Ruiz · Mohammad Ghavamzadeh · Alessandro Lazaric
- 2017 Poster: Model-Independent Online Learning for Influence Maximization »
  Sharan Vaswani · Branislav Kveton · Zheng Wen · Mohammad Ghavamzadeh · Laks V.S. Lakshmanan · Mark Schmidt
- 2017 Poster: Online Learning to Rank in Stochastic Click Models »
  Masrour Zoghi · Tomas Tunys · Mohammad Ghavamzadeh · Branislav Kveton · Csaba Szepesvari · Zheng Wen
- 2017 Poster: Bottleneck Conditional Density Estimation »
  Rui Shu · Hung Bui · Mohammad Ghavamzadeh
- 2017 Poster: Fake News Mitigation via Point Process Based Intervention »
  Mehrdad Farajtabar · Jiachen Yang · Xiaojing Ye · Huan Xu · Rakshit Trivedi · Elias Khalil · Shuang Li · Le Song · Hongyuan Zha
- 2017 Talk: Active Learning for Accurate Estimation of Linear Models »
  Carlos Riquelme Ruiz · Mohammad Ghavamzadeh · Alessandro Lazaric
- 2017 Talk: Bottleneck Conditional Density Estimation »
  Rui Shu · Hung Bui · Mohammad Ghavamzadeh
- 2017 Talk: Fake News Mitigation via Point Process Based Intervention »
  Mehrdad Farajtabar · Jiachen Yang · Xiaojing Ye · Huan Xu · Rakshit Trivedi · Elias Khalil · Shuang Li · Le Song · Hongyuan Zha
- 2017 Talk: Online Learning to Rank in Stochastic Click Models »
  Masrour Zoghi · Tomas Tunys · Mohammad Ghavamzadeh · Branislav Kveton · Csaba Szepesvari · Zheng Wen
- 2017 Talk: Model-Independent Online Learning for Influence Maximization »
  Sharan Vaswani · Branislav Kveton · Zheng Wen · Mohammad Ghavamzadeh · Laks V.S. Lakshmanan · Mark Schmidt