Distributionally Robust Markov Games with Average Reward
Abstract
We propose and study distributionally robust Markov games (DR‑MGs) with the average‑reward criterion as a crucial framework for multi-agent decision-making under model mismatch over extended horizons. Under a standard irreducibility assumption, we first derive a correspondence between the optimal policies and the solutions of the robust Bellman equation, based on which we establish the existence of a stationary Nash Equilibrium (NE) of the game. We then study DR-MGs under the more general weakly communicating setting: we construct a set-valued map based on the constant-gain optimal robust Bellman operator, show that its values are subsets of the best-response policies, and prove that this map admits a fixed point, which implies the existence of an NE. We further design two algorithms, Robust Nash‑Iteration and Robust TD Descent, with provable convergence guarantees. Finally, we show that the NE under the average‑reward criterion can be approximated by those of discounted DR-MGs as the discount factor approaches one. Our study provides a comprehensive theoretical and algorithmic foundation for decision-making in complex, uncertain, and long-running multi-player environments.