Offline Two-Player Zero-Sum Markov Games with KL Regularization
Claire Chen ⋅ Yuheng Zhang ⋅ Xinyu Liu ⋅ Zixuan Xie ⋅ Shuze Liu ⋅ Nan Jiang
Abstract
We study the problem of learning Nash equilibria in offline two-player zero-sum Markov games. While existing approaches often rely on explicit pessimism to address distribution shift, we show that KL regularization alone suffices to stabilize learning and guarantee convergence. We first introduce Regularized Offline Sequential Equilibrium (ROSE), a theoretical framework that achieves a fast $\widetilde{\mathcal{O}}(1/n)$ convergence rate under \textit{unilateral concentrability}, improving over the standard $\widetilde{\mathcal{O}}(1/\sqrt{n})$ rates in unregularized settings. We then propose Sequential Offline Self-play Mirror Descent (SOS-MD), a practical model-free algorithm based on least-squares value estimation and iterative self-play updates. We prove that SOS-MD attains the same $\widetilde{\mathcal{O}}(1/n)$ statistical rate with a linear iteration complexity.
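For intuition, a KL-regularized mirror-descent self-play step of the kind the abstract describes can be sketched as follows. The notation here is illustrative and not necessarily the paper's exact update: $Q_t$ denotes the current least-squares value estimate for the max-player, $\pi_{\mathrm{ref}}$ a reference policy (e.g., the behavior policy) toward which the KL term regularizes, $\beta$ the regularization strength, and $\eta$ the step size:
\[
\pi_{t+1}(\cdot \mid s) \;=\; \arg\max_{\pi \in \Delta(\mathcal{A})} \Big\{ \big\langle Q_t(s,\cdot),\, \pi \big\rangle \;-\; \beta\,\mathrm{KL}\big(\pi \,\|\, \pi_{\mathrm{ref}}(\cdot \mid s)\big) \;-\; \tfrac{1}{\eta}\,\mathrm{KL}\big(\pi \,\|\, \pi_t(\cdot \mid s)\big) \Big\}.
\]
The min-player's update is symmetric, with $-Q_t$ in place of $Q_t$. Under this reading, the strong convexity contributed by the KL terms is the mechanism that substitutes for explicit pessimism and underlies the fast $\widetilde{\mathcal{O}}(1/n)$ rates claimed above.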