Abstract:
We study the problem of multi-agent reinforcement learning (MARL) with adaptivity constraints --- a new problem motivated by real-world applications where deployments of new policies are costly and the number of policy updates must be minimized. For two-player zero-sum Markov games, we design a (policy) elimination-based algorithm that achieves a regret of , while the batch complexity is only . In the above, $S$ denotes the number of states, $A$ and $B$ are the numbers of actions for the two players respectively, $H$ is the horizon, and $K$ is the number of episodes. Furthermore, we prove a batch complexity lower bound for all algorithms with such a regret bound, which matches our upper bound up to logarithmic factors. As a byproduct, our techniques naturally extend to learning bandit games and reward-free MARL within near-optimal batch complexity. To the best of our knowledge, this is the first line of results towards understanding MARL with low adaptivity.
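The batching idea behind low-adaptivity learning can be illustrated in the simpler multi-armed bandit setting. The sketch below is not the paper's algorithm (which handles Markov games); it is a minimal batched successive-elimination loop under an assumed doubling batch schedule, showing how a learner can achieve low regret while updating its policy only logarithmically many times.

```python
import math
import random

def batched_elimination(means, budget, delta=0.05, seed=0):
    """Batched successive elimination for a Bernoulli multi-armed bandit.

    The learner commits to a policy (uniform play over surviving arms),
    deploys it for a whole batch, then eliminates arms whose upper
    confidence bound falls below the best lower confidence bound.
    Doubling the batch length keeps the number of policy updates
    (the "batch complexity") logarithmic in the budget.
    """
    rng = random.Random(seed)
    n = len(means)
    active = list(range(n))
    counts = [0] * n
    sums = [0.0] * n
    pulls, batches, batch_len = 0, 0, 1
    while pulls < budget and len(active) > 1:
        for arm in active:              # one deployment of the current policy
            for _ in range(batch_len):
                if pulls >= budget:
                    break
                sums[arm] += 1.0 if rng.random() < means[arm] else 0.0
                counts[arm] += 1
                pulls += 1
        batches += 1

        # Hoeffding confidence width for each surviving arm
        def width(a):
            return math.sqrt(
                math.log(2 * n * batches / delta) / (2 * max(counts[a], 1))
            )

        def mean(a):
            return sums[a] / max(counts[a], 1)

        best_lcb = max(mean(a) - width(a) for a in active)
        active = [a for a in active if mean(a) + width(a) >= best_lcb]
        batch_len *= 2                  # geometric (doubling) batch schedule
    return active, batches, pulls
```

With a budget of $K$ episodes, the doubling schedule guarantees at most $O(\log K)$ batches, since the number of pulls grows geometrically between policy updates; the elimination step ensures the best arm is never discarded (its own upper bound always exceeds its own lower bound).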