

Spotlight

For Learning in Symmetric Teams, Local Optima are Global Nash Equilibria

Scott Emmons · Caspar Oesterheld · Andrew Critch · Vincent Conitzer · Stuart Russell

Room 310
Livestream session: Game Theory/RL/Planning

Abstract:

Although it has been known since the 1970s that a globally optimal strategy profile in a common-payoff game is a Nash equilibrium, global optimality is a strict requirement that limits the result's applicability. In this work, we show that any locally optimal symmetric strategy profile is also a (global) Nash equilibrium. Furthermore, we show that this result is robust to perturbations to the common payoff and to the local optimum. Applied to machine learning, our result provides a global guarantee for any gradient method that finds a local optimum in symmetric strategy space. While this result indicates stability to unilateral deviation, we nevertheless identify broad classes of games where mixed local optima are unstable under joint, asymmetric deviations. We analyze the prevalence of instability by running learning algorithms in a suite of symmetric games, and we conclude by discussing the applicability of our results to multi-agent RL, cooperative inverse RL, and decentralized POMDPs.
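The claim can be illustrated concretely. Below is a minimal sketch (an illustration constructed for this page, not code from the paper): in a symmetric 2x2 common-payoff coordination game, projected gradient ascent over the symmetric strategy space converges to a local optimum, and a unilateral-deviation check confirms that optimum is a Nash equilibrium, even when it is not the globally best profile. The game matrix and all function names here are hypothetical.

```python
# Hypothetical example: symmetric 2x2 common-payoff coordination game.
# Both players receive u(x, y) = x @ A @ y; A is symmetric, so the game is too.
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 1.0]])

def payoff(p, q):
    """Common payoff when the players mix with distributions p and q."""
    return p @ A @ q

def symmetric_ascent(q, lr=0.1, steps=500):
    """Projected gradient ascent on f(q) = q @ A @ q over the simplex,
    i.e. both players are constrained to play the same mixed strategy q."""
    q = q.astype(float).copy()
    for _ in range(steps):
        grad = 2 * A @ q                   # gradient of q @ A @ q (A symmetric)
        q = q + lr * (grad - grad.mean())  # step along the simplex tangent
        q = np.clip(q, 0.0, None)
        q = q / q.sum()                    # crude projection back to the simplex
    return q

def is_nash(q, tol=1e-6):
    """Unilateral-deviation check: no pure strategy beats sticking with q."""
    return (A @ q).max() <= payoff(q, q) + tol

# Starting below the interior saddle at q = (1/3, 2/3), ascent converges to
# the *worse* local optimum (0, 1) with payoff 1 -- still a Nash equilibrium.
q = symmetric_ascent(np.array([0.2, 0.8]))
print(np.round(q, 3), is_nash(q))
```

Starting instead from an initial point above the saddle, the same ascent reaches the payoff-2 optimum (1, 0), which the same check also certifies as Nash, matching the paper's guarantee that any symmetric local optimum found by a gradient method is a global Nash equilibrium.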
