Spotlight
On Well-posedness and Minimax Optimal Rates of Nonparametric Q-function Estimation in Off-policy Evaluation
Xiaohong Chen · Zhengling Qi
Room 327 - 329
Abstract:
We study the off-policy evaluation (OPE) problem in an infinite-horizon Markov decision process with continuous states and actions. We recast Q-function estimation into a special form of the nonparametric instrumental variables (NPIV) estimation problem. We first show that, under one mild condition, the NPIV formulation of Q-function estimation is well-posed in the sense of the L2-measure of ill-posedness with respect to the data generating distribution, bypassing a strong assumption on the discount factor imposed in the recent literature for obtaining the convergence rates of various Q-function estimators. Thanks to this new well-posedness property, we derive the first minimax lower bounds for the convergence rates of nonparametric estimation of the Q-function and its derivatives in both sup-norm and L2-norm, which are shown to be the same as those for classical nonparametric regression (Stone, 1982). We then propose a sieve two-stage least squares estimator and establish its rate-optimality in both norms under some mild conditions. Our general results on well-posedness and the minimax lower bounds are of independent interest for studying not only other nonparametric estimators of the Q-function but also efficient estimation of the value of any target policy in off-policy settings.
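The NPIV recasting mentioned in the abstract follows the standard step of writing the Bellman equation for the target policy's Q-function as a conditional moment restriction. A minimal sketch, with notation assumed here rather than taken from the talk (target policy pi, discount factor gamma, transition tuple (S_t, A_t, R_t, S_{t+1})):

```latex
% Sketch under assumed notation: Bellman equation for Q^pi ...
\[
  Q^{\pi}(s,a) \;=\; \mathbb{E}\!\left[\, R_t + \gamma \int_{\mathcal{A}} Q^{\pi}(S_{t+1}, a')\, \pi(da' \mid S_{t+1}) \;\middle|\; S_t = s,\ A_t = a \right],
\]
% ... rewritten as a conditional moment (NPIV-type) restriction with instrument (S_t, A_t):
\[
  \mathbb{E}\!\left[\, R_t + \gamma \int_{\mathcal{A}} Q^{\pi}(S_{t+1}, a')\, \pi(da' \mid S_{t+1}) - Q^{\pi}(S_t, A_t) \;\middle|\; S_t, A_t \right] = 0 .
\]
```

As a rough illustration of a sieve two-stage least squares (2SLS) estimator built on such a moment restriction, here is a toy numerical sketch, not the authors' implementation; the polynomial sieve, deterministic target policy, 1-D state and action, and simulated data are all hypothetical choices made only for the example:

```python
import numpy as np

def poly_basis(s, a, degree):
    """Tensor-product polynomial sieve basis in (s, a); 1-D state and action for simplicity."""
    feats = [s**i * a**j for i in range(degree + 1) for j in range(degree + 1)]
    return np.column_stack(feats)

def sieve_2sls_q(S, A, R, S_next, target_policy, gamma=0.9, deg_q=2, deg_iv=3):
    """Toy sieve 2SLS estimate of the Q-function of a deterministic `target_policy`
    from transitions (S, A, R, S_next) collected under a possibly different behavior
    policy. Returns a callable Q_hat(s, a)."""
    psi = poly_basis(S, A, deg_q)                   # sieve for Q at (S_t, A_t)
    A_next = target_policy(S_next)                  # action the target policy would take
    psi_next = poly_basis(S_next, A_next, deg_q)    # sieve at (S_{t+1}, pi(S_{t+1}))
    V = psi - gamma * psi_next                      # "endogenous regressor" in the NPIV form
    B = poly_basis(S, A, deg_iv)                    # instrument sieve b(S_t, A_t), J >= K

    # 2SLS: project V and R onto the instrument space, then run least squares,
    # i.e. c_hat = (V' P V)^{-1} V' P R with P the projection onto span(B).
    P = B @ np.linalg.pinv(B.T @ B) @ B.T
    c_hat = np.linalg.lstsq(P @ V, P @ R, rcond=None)[0]

    def Q_hat(s, a):
        return poly_basis(np.atleast_1d(s), np.atleast_1d(a), deg_q) @ c_hat
    return Q_hat

# Toy usage with simulated 1-D transitions (purely illustrative dynamics and rewards).
rng = np.random.default_rng(0)
n = 2000
S = rng.uniform(-1, 1, n)
A = rng.uniform(-1, 1, n)
R = np.cos(S) - A**2 + 0.1 * rng.standard_normal(n)
S_next = np.clip(0.8 * S + 0.2 * A + 0.1 * rng.standard_normal(n), -1, 1)

Q_hat = sieve_2sls_q(S, A, R, S_next, target_policy=lambda s: 0.5 * np.tanh(s))
print(Q_hat(0.0, 0.1))
```

The instrument dimension is taken larger than the Q-sieve dimension (deg_iv > deg_q) so the first-stage projection is non-trivial; in practice both sieve dimensions would be chosen to grow with the sample size.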