

Poster

Double Reinforcement Learning for Efficient and Robust Off-Policy Evaluation

Nathan Kallus · Masatoshi Uehara

Keywords: [ Reinforcement Learning - Theory ] [ Causality ] [ Reinforcement Learning ]


Abstract: Off-policy evaluation (OPE) in reinforcement learning allows one to evaluate novel decision policies without needing to conduct exploration, which is often costly or otherwise infeasible. We consider for the first time the semiparametric efficiency limits of OPE in Markov decision processes (MDPs), where actions, rewards, and states are memoryless. We show existing OPE estimators may fail to be efficient in this setting. We develop a new estimator based on cross-fold estimation of $q$-functions and marginalized density ratios, which we term double reinforcement learning (DRL). We show that DRL is efficient when both components are estimated at fourth-root rates and is also doubly robust when only one component is consistent. We investigate these properties empirically and demonstrate the performance benefits due to harnessing memorylessness.
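To give a sense of the construction described above, a doubly robust OPE estimator built from the two nuisance components typically combines them additively; the following is a rough sketch with notation chosen here for illustration (the exact estimator in the paper may differ in details):

$$\hat{J} = \frac{1}{n}\sum_{i=1}^{n}\sum_{t=0}^{T}\gamma^{t}\Big[\hat{\mu}_t\big(s_t^{(i)},a_t^{(i)}\big)\big(r_t^{(i)}-\hat{q}_t\big(s_t^{(i)},a_t^{(i)}\big)\big)+\hat{\mu}_{t-1}\big(s_{t-1}^{(i)},a_{t-1}^{(i)}\big)\,\mathbb{E}_{a\sim\pi_e(\cdot\mid s_t^{(i)})}\big[\hat{q}_t\big(s_t^{(i)},a\big)\big]\Big],$$

where $\hat{q}_t$ is the estimated $q$-function of the evaluation policy $\pi_e$, $\hat{\mu}_t$ is the estimated marginalized state-action density ratio between the evaluation and behavior distributions (with the convention $\hat{\mu}_{-1}\equiv 1$), and each nuisance is fit on folds excluding the trajectory on which it is evaluated (cross-fold estimation). The first term corrects the $q$-function's bias via a density-ratio-weighted residual, while the second plugs in the $q$-function's prediction; consistency of either nuisance alone then suffices for consistency of $\hat{J}$, which is the double-robustness property stated in the abstract.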
