Spotlight
Average-Reward Off-Policy Policy Evaluation with Function Approximation
Shangtong Zhang · Yi Wan · Richard Sutton · Shimon Whiteson

Thu Jul 22 06:30 PM -- 06:35 PM (PDT)

We consider off-policy policy evaluation with function approximation (FA) in average-reward MDPs, where the goal is to estimate both the reward rate and the differential value function. For this problem, bootstrapping is necessary and, along with off-policy learning and FA, results in the deadly triad (Sutton & Barto, 2018). To address the deadly triad, we propose two novel algorithms, reproducing the celebrated success of Gradient TD algorithms in the average-reward setting. In terms of estimating the differential value function, the algorithms are the first convergent off-policy linear function approximation algorithms. In terms of estimating the reward rate, the algorithms are the first convergent off-policy linear function approximation algorithms that do not require estimating the density ratio. We demonstrate empirically the advantage of the proposed algorithms, as well as their nonlinear variants, over a competitive density-ratio-based approach, in a simple domain as well as challenging robot simulation tasks.
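To make the setting concrete, below is a minimal sketch, in Python, of the quantities the abstract refers to: estimating the reward rate and the differential value function off-policy with linear function approximation. The sketch uses a plain semi-gradient differential TD update with importance-sampling ratios, which is exactly the bootstrapping + off-policy + FA combination that constitutes the deadly triad and can diverge; it is not the paper's proposed Gradient-TD-style algorithm, and the environment, feature map, policies, and step sizes are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions, n_features = 5, 2, 3
phi = rng.normal(size=(n_states, n_features))                       # state features (assumed)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))    # transition kernel (assumed)
R = rng.normal(size=(n_states, n_actions))                          # rewards (assumed)
pi = rng.dirichlet(np.ones(n_actions), size=n_states)               # target policy (assumed)
mu = np.full((n_states, n_actions), 1.0 / n_actions)                # behavior policy: uniform (assumed)

w = np.zeros(n_features)   # weights of the differential value function, v(s) ~ phi[s] @ w
r_bar = 0.0                # reward-rate estimate
alpha, eta = 0.01, 0.01    # step sizes (assumed)

s = 0
for t in range(200_000):
    a = rng.choice(n_actions, p=mu[s])                 # act with the behavior policy
    s_next = rng.choice(n_states, p=P[s, a])
    rho = pi[s, a] / mu[s, a]                          # importance-sampling ratio
    # Differential TD error: r - r_bar + v(s') - v(s); bootstrapping on v(s')
    delta = R[s, a] - r_bar + phi[s_next] @ w - phi[s] @ w
    w += alpha * rho * delta * phi[s]                  # semi-gradient update of the value weights
    r_bar += eta * rho * delta                         # update of the reward-rate estimate
    s = s_next

print("estimated reward rate:", r_bar)
print("estimated differential values:", phi @ w)

Under off-policy sampling this semi-gradient update has no convergence guarantee; the paper's contribution is a pair of Gradient-TD-style algorithms that make the same estimation problem convergent without requiring a density-ratio estimate.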

Author Information

Shangtong Zhang (University of Oxford)
Yi Wan (University of Alberta)
Richard Sutton (DeepMind / University of Alberta)
Shimon Whiteson (University of Oxford)
