

Model-Free Robust Average-Reward Reinforcement Learning

Yue Wang · Alvaro Velasquez · George Atia · Ashley Prater-Bennette · Shaofeng Zou

Exhibit Hall 1 #538


Robust Markov decision processes (MDPs) address the challenge of model uncertainty by optimizing the worst-case performance over an uncertainty set of MDPs. In this paper, we focus on robust average-reward MDPs in the model-free setting. We first theoretically characterize the structure of solutions to the robust average-reward Bellman equation, which is essential for our later convergence analysis. We then design two model-free algorithms, robust relative value iteration (RVI) TD and robust RVI Q-learning, and theoretically prove their convergence to the optimal solution. We provide several widely used uncertainty sets as examples, including those defined by the contamination model, total variation, Chi-squared divergence, Kullback-Leibler (KL) divergence, and Wasserstein distance.
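To illustrate the kind of update rule the abstract describes, below is a minimal sketch of a robust RVI Q-learning loop under a delta-contamination uncertainty set, where the worst-case expectation of a value function v is (1 - delta) * v(s') + delta * min_s v(s). The synthetic MDP, the constant step size, and the reference function f(Q) = Q(s0, a0) are illustrative assumptions, not the paper's algorithmic details or experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA = 3, 2          # tiny synthetic MDP
delta = 0.1            # contamination level (assumed)
alpha = 0.05           # constant step size (assumed)

# Random nominal transition kernel P[s, a, :] and bounded rewards r[s, a].
P = rng.dirichlet(np.ones(nS), size=(nS, nA))
r = rng.uniform(0.0, 1.0, size=(nS, nA))

Q = np.zeros((nS, nA))
s = 0
for t in range(50_000):
    a = rng.integers(nA)                    # uniform exploration
    s_next = rng.choice(nS, p=P[s, a])      # sample from the nominal kernel
    v = Q.max(axis=1)                       # greedy state values
    # Contamination support function estimate of the worst-case next value:
    # (1 - delta) * v(s') + delta * min_s v(s)
    sigma = (1.0 - delta) * v[s_next] + delta * v.min()
    f = Q[0, 0]                             # reference offset, keeps Q bounded
    Q[s, a] += alpha * (r[s, a] - f + sigma - Q[s, a])
    s = s_next

# At the fixed point, the reference value f(Q) equals the robust
# average reward, so Q[0, 0] serves as its running estimate.
rho_hat = float(Q[0, 0])
print(rho_hat)
```

Subtracting the reference f(Q) at every step is what makes this a *relative* value iteration: without it, average-reward Q-values grow without bound, and with it the iterates stay bounded while f(Q) converges to the worst-case average reward.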
