Stochastic Variance Reduction Methods for Policy Evaluation
Simon Du · Jianshu Chen · Lihong Li · Lin Xiao · Dengyong Zhou

Mon Aug 07 05:48 PM -- 06:06 PM (PDT) @ C4.5

Policy evaluation is concerned with estimating the value function that predicts long-term values of states under a given policy. It is a crucial step in many reinforcement-learning algorithms. In this paper, we focus on policy evaluation with linear function approximation over a fixed dataset. We first transform the empirical policy evaluation problem into a (quadratic) convex-concave saddle-point problem, and then present a primal-dual batch gradient method, as well as two stochastic variance reduction methods for solving the problem. These algorithms scale linearly in both sample size and feature dimension. Moreover, they achieve linear convergence even when the saddle-point problem has only strong concavity in the dual variables but no strong convexity in the primal variables. Numerical experiments on benchmark problems demonstrate the effectiveness of our methods.
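To make the setup concrete, below is a minimal NumPy sketch of an SVRG-style primal-dual update for the saddle-point formulation min_theta max_w w^T(b - A theta) - (1/2) w^T C w, with A = E[phi(s)(phi(s) - gamma phi(s'))^T], b = E[r phi(s)], and C = E[phi(s) phi(s)^T]. The function name, step sizes, and epoch schedule here are illustrative assumptions, not the authors' exact algorithm.

import numpy as np

def svrg_policy_eval(phi, phi_next, rewards, gamma=0.99,
                     sigma_theta=0.01, sigma_w=0.01, epochs=20, seed=0):
    """Fit theta so that V(s) ~= phi(s)^T theta on a fixed dataset.

    phi      : (n, d) array, features of visited states s_t
    phi_next : (n, d) array, features of successor states s_{t+1}
    rewards  : (n,)   array, observed rewards r_t
    """
    n, d = phi.shape
    theta = np.zeros(d)             # primal variable (value-function weights)
    w = np.zeros(d)                 # dual variable
    rng = np.random.default_rng(seed)
    diff = phi - gamma * phi_next   # row t holds (phi_t - gamma * phi'_t)^T

    for _ in range(epochs):
        # Snapshot: full batch gradients of the saddle-point objective
        #   min_theta max_w  w^T (b - A theta) - 0.5 * w^T C w,
        # where A = mean[phi_t (phi_t - gamma phi'_t)^T],
        #       b = mean[r_t phi_t], and C = mean[phi_t phi_t^T].
        theta0, w0 = theta.copy(), w.copy()
        g_theta_full = -diff.T @ (phi @ w0) / n                 # -A^T w0
        g_w_full = phi.T @ (rewards - diff @ theta0 - phi @ w0) / n  # b - A theta0 - C w0

        for _ in range(n):          # one pass of stochastic corrections
            t = rng.integers(n)
            f, fd, r = phi[t], diff[t], rewards[t]
            # Variance-reduced gradients: the per-sample gradient at the
            # current point, minus the same sample's gradient at the
            # snapshot, plus the batch gradient.
            g_theta = -fd * (f @ w) + fd * (f @ w0) + g_theta_full
            g_w = (f * (r - fd @ theta - f @ w)
                   - f * (r - fd @ theta0 - f @ w0) + g_w_full)
            theta -= sigma_theta * g_theta   # primal (gradient descent)
            w += sigma_w * g_w               # dual (gradient ascent)

    return theta

Because C makes the objective strongly concave in w even when it is not strongly convex in theta, variance-reduced updates of this form can converge linearly, which is the regime the abstract describes; each inner update touches a single sample at O(d) cost, matching the stated linear scaling in sample size and feature dimension.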

Author Information

Simon Du (Carnegie Mellon University)
Jianshu Chen (Microsoft Research)
Lihong Li (Microsoft Research)
Lin Xiao (Microsoft Research)
Dengyong Zhou (Microsoft Research)
