Learning and Planning in Average-Reward Markov Decision Processes
Yi Wan · Abhishek Naik · Richard Sutton

Tue Jul 20 09:00 PM -- 11:00 PM (PDT) @ Virtual

We introduce learning and planning algorithms for average-reward MDPs, including 1) the first general proven-convergent off-policy model-free control algorithm without reference states, 2) the first proven-convergent off-policy model-free prediction algorithm, and 3) the first off-policy learning algorithm that converges to the actual value function rather than to the value function plus an offset. All of our algorithms are based on using the temporal-difference error rather than the conventional error when updating the estimate of the average reward. Our proof techniques are a slight generalization of those by Abounadi, Bertsekas, and Borkar (2001). In experiments with an Access-Control Queuing Task, we show some of the difficulties that can arise when using methods that rely on reference states and argue that our new algorithms are significantly easier to use.
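The abstract's key idea is to update the average-reward estimate with the temporal-difference error itself, rather than with the conventional error (the reward minus the current average-reward estimate). Below is a minimal, hedged sketch of that idea in a tabular off-policy control setting in the spirit of the paper's Differential Q-learning; the environment interface `step`, the specific step sizes, and the toy MDP in the usage example are illustrative assumptions, not the authors' code.

```python
import random

def differential_q_learning(step, num_states, num_actions,
                            alpha=0.1, eta=1.0, epsilon=0.1,
                            steps=20000, seed=0):
    """Tabular sketch of TD-error-based average-reward control.

    `step(s, a) -> (reward, next_state)` is a hypothetical environment
    interface assumed for this example.
    """
    rng = random.Random(seed)
    Q = [[0.0] * num_actions for _ in range(num_states)]
    r_bar = 0.0  # estimate of the (optimal) average reward
    s = 0
    for _ in range(steps):
        # epsilon-greedy behavior policy; the update itself is off-policy
        if rng.random() < epsilon:
            a = rng.randrange(num_actions)
        else:
            a = max(range(num_actions), key=lambda a_: Q[s][a_])
        r, s2 = step(s, a)
        # Differential TD error for the average-reward setting
        delta = r - r_bar + max(Q[s2]) - Q[s][a]
        Q[s][a] += alpha * delta
        # The point the abstract emphasizes: update r_bar with the TD
        # error delta, not with the conventional error (r - r_bar).
        r_bar += eta * alpha * delta
        s = s2
    return Q, r_bar

# Illustrative one-state MDP: action 0 yields reward 1, action 1 yields 2,
# so the optimal reward rate is 2.
def toy_step(s, a):
    return (2.0 if a == 1 else 1.0, 0)

Q, r_bar = differential_q_learning(toy_step, num_states=1, num_actions=2)
print(r_bar)  # should approach the optimal reward rate of 2
```

Because the update is off-policy, `r_bar` tracks the optimal reward rate here even though the epsilon-greedy behavior policy occasionally takes the inferior action; note also that no reference state is designated anywhere in the update.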

Author Information

Yi Wan (University of Alberta)
Abhishek Naik (University of Alberta; Amii)
Richard Sutton (DeepMind / University of Alberta)
