Model-Free Approach to Evaluate Reinforcement Learning Algorithms
Denis Belomestny · Ilya Levin · Eric Moulines · Alexey Naumov · Sergey Samsonov · Veronika Zorina
Policy evaluation is an important instrument for the comparison of different algorithms in Reinforcement Learning (RL). Yet even precise knowledge of the value function $V^{\pi}$ corresponding to a policy $\pi$ does not provide reliable information on how far the policy $\pi$ is from the optimal one. We present a novel model-free upper value iteration procedure ({\sf UVIP}) that allows us to estimate the suboptimality gap $V^{\star}(x) - V^{\pi}(x)$ from above and to construct confidence intervals for $V^\star$. Our approach relies on upper bounds on the solution of the Bellman optimality equation obtained via a martingale approach. We provide theoretical guarantees for {\sf UVIP} under general assumptions and illustrate its performance on a number of benchmark RL problems.
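To make the idea concrete, below is a minimal tabular sketch of an upper value iteration in the spirit of the abstract, not the paper's exact algorithm. It assumes the martingale correction takes the form $M^{\pi}(x,a,y) = V^{\pi}(y) - \mathbb{E}[V^{\pi}(Y^{x,a})]$, subtracted inside the Bellman max so that the resulting fixed point dominates $V^{\star}$; the toy MDP, function names, and constants are all hypothetical, and expectations that the model-free method would estimate from samples are here computed partly from the known model for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma = 5, 2, 0.9

# Hypothetical toy MDP: transition kernel P[x, a] and rewards r[x, a].
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # shape (nS, nA, nS)
r = rng.uniform(size=(nS, nA))

# Fixed policy to evaluate: uniform over actions (deliberately suboptimal).
pi = np.full((nS, nA), 1.0 / nA)

def policy_value(pi):
    """Exact V^pi from the linear system (I - gamma * P_pi) V = r_pi."""
    P_pi = np.einsum('sa,san->sn', pi, P)
    r_pi = np.einsum('sa,sa->s', pi, r)
    return np.linalg.solve(np.eye(nS) - gamma * P_pi, r_pi)

def upper_value_iteration(V_pi, n_iter=60, n_mc=300):
    """Monte Carlo iteration of an (assumed) upper Bellman operator:
    V_up(x) = E[ max_a { r(x,a) + gamma*V_up(Y^{x,a}) - gamma*M(x,a,Y^{x,a}) } ]
    with martingale increment M(x,a,y) = V_pi(y) - E[V_pi(Y^{x,a})].
    """
    # E[V_pi(Y^{x,a})]; a fully model-free variant would estimate this from samples too.
    EV_pi = np.einsum('san,n->sa', P, V_pi)
    V_up = V_pi.copy()
    for _ in range(n_iter):
        V_new = np.zeros(nS)
        for x in range(nS):
            total = 0.0
            for _ in range(n_mc):
                # One simulated next state per action, as from a generative model.
                best = -np.inf
                for a in range(nA):
                    y = rng.choice(nS, p=P[x, a])
                    val = r[x, a] + gamma * V_up[y] - gamma * (V_pi[y] - EV_pi[x, a])
                    best = max(best, val)
                total += best
            V_new[x] = total / n_mc
        V_up = V_new
    return V_up

V_pi = policy_value(pi)

# Reference V* via standard value iteration (uses the model, for comparison only).
V_star = np.zeros(nS)
for _ in range(1000):
    V_star = (r + gamma * np.einsum('san,n->sa', P, V_star)).max(axis=1)

V_up = upper_value_iteration(V_pi)
print("V_pi  :", np.round(V_pi, 3))
print("V_star:", np.round(V_star, 3))
print("V_up  :", np.round(V_up, 3))          # should dominate V_star up to MC error
print("suboptimality bound V_up - V_pi:", np.round(V_up - V_pi, 3))
```

The printed difference `V_up - V_pi` then upper-bounds the true suboptimality gap $V^{\star}(x) - V^{\pi}(x)$ up to Monte Carlo error, which is the quantity the abstract says {\sf UVIP} estimates.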

Author Information

Denis Belomestny (Universitaet Duisburg-Essen)
Ilya Levin (National Research University Higher School of Economics)
Eric Moulines (Ecole Polytechnique)
Alexey Naumov (National Research University Higher School of Economics)
Sergey Samsonov (National Research University Higher School of Economics)
Veronika Zorina (National Research University Higher School of Economics)