Poster
Bootstrapping Fitted Q-Evaluation for Off-Policy Inference
Botao Hao · Xiang Ji · Yaqi Duan · Hao Lu · Csaba Szepesvari · Mengdi Wang

Wed Jul 21 09:00 AM -- 11:00 AM (PDT)

Bootstrapping provides a flexible and effective approach for assessing the quality of batch reinforcement learning, yet its theoretical properties are poorly understood. In this paper, we study the use of bootstrapping in off-policy evaluation (OPE) and, in particular, focus on fitted Q-evaluation (FQE), which is known to be minimax-optimal in the tabular and linear-model cases. We propose a bootstrapped FQE method for inferring the distribution of the policy evaluation error and show that this method is asymptotically efficient and distributionally consistent for off-policy statistical inference. To overcome the computational cost of bootstrapping, we further adapt a subsampling procedure that improves the runtime by an order of magnitude. We numerically evaluate the bootstrapping method in classical RL environments for confidence interval estimation, estimating the variance of the off-policy evaluator, and estimating the correlation between multiple off-policy evaluators.
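To make the idea concrete, below is a minimal sketch of bootstrapped FQE in a toy tabular MDP: fit FQE on the batch data, refit it on bootstrap resamples of the transitions, and read a percentile confidence interval off the bootstrap distribution of value estimates. This is an illustration under assumed choices, not the authors' implementation; the synthetic environment, the `fqe` helper, and all constants (`n_states`, `B`, the 95% level) are hypothetical.

```python
# Illustrative sketch only: tabular FQE with a nonparametric bootstrap CI.
# The MDP, policy, and all hyperparameters below are made-up toy choices.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 2, 0.9

# Random tabular MDP and a fixed target policy pi(a|s).
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a] -> next-state probs
R = rng.uniform(size=(n_states, n_actions))                       # mean rewards
pi = rng.dirichlet(np.ones(n_actions), size=n_states)             # target policy

def collect(n):
    """Batch behavior data: uniform-random actions from random start states."""
    s = rng.integers(n_states, size=n)
    a = rng.integers(n_actions, size=n)
    s2 = np.array([rng.choice(n_states, p=P[si, ai]) for si, ai in zip(s, a)])
    r = R[s, a] + 0.1 * rng.standard_normal(n)
    return s, a, r, s2

def fqe(s, a, r, s2, iters=200):
    """Tabular FQE: iterate the empirical Bellman operator for the target policy."""
    Q = np.zeros((n_states, n_actions))
    counts = np.zeros((n_states, n_actions))
    np.add.at(counts, (s, a), 1.0)
    counts = np.maximum(counts, 1.0)  # avoid dividing by zero for unseen (s, a)
    for _ in range(iters):
        target = r + gamma * (pi[s2] * Q[s2]).sum(axis=1)  # E_pi[Q(s', .)]
        Qsum = np.zeros_like(Q)
        np.add.at(Qsum, (s, a), target)
        Q = Qsum / counts
    # Policy value from a uniform initial-state distribution.
    return (pi * Q).sum(axis=1).mean()

s, a, r, s2 = collect(2000)
point = fqe(s, a, r, s2)

# Nonparametric bootstrap over transitions: refit FQE on each resample and
# form a percentile confidence interval for the policy value.
B = 200
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(len(s), size=len(s))
    boot[b] = fqe(s[idx], a[idx], r[idx], s2[idx])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"FQE estimate {point:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```

The subsampling speedup the abstract mentions would correspond to drawing m < n transitions per bootstrap replicate (changing only the `size` argument in the resampling line); the precise rescaling needed for valid inference is as specified in the paper.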

Author Information

Botao Hao (DeepMind)
Xiang Ji (Princeton University)
Yaqi Duan (Princeton University)
Hao Lu (Princeton University)
Csaba Szepesvari (DeepMind/University of Alberta)
Mengdi Wang (Princeton University)
