
For Learning in Symmetric Teams, Local Optima are Global Nash Equilibria
Scott Emmons · Caspar Oesterheld · Andrew Critch · Vincent Conitzer · Stuart Russell

Wed Jul 20 03:30 PM -- 05:30 PM (PDT) @ Hall E #1115

Although it has been known since the 1970s that a *globally* optimal strategy profile in a common-payoff game is a Nash equilibrium, global optimality is a strict requirement that limits the result's applicability. In this work, we show that any *locally* optimal symmetric strategy profile is also a (global) Nash equilibrium. Furthermore, we show that this result is robust to perturbations to the common payoff and to the local optimum. Applied to machine learning, our result provides a global guarantee for any gradient method that finds a local optimum in symmetric strategy space. While this result indicates stability to *unilateral* deviation, we nevertheless identify broad classes of games where mixed local optima are unstable under *joint*, asymmetric deviations. We analyze the prevalence of instability by running learning algorithms in a suite of symmetric games, and we conclude by discussing the applicability of our results to multi-agent RL, cooperative inverse RL, and decentralized POMDPs.
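The abstract's two claims can be illustrated on a toy example. Below is a minimal sketch (not code from the paper): a 2x2 symmetric common-payoff "anti-coordination" game, where gradient ascent restricted to symmetric profiles converges to a mixed local optimum. That mixed profile is a Nash equilibrium (no unilateral deviation helps), yet a joint, asymmetric deviation strictly improves the common payoff, matching the instability the abstract describes. The game matrix and the exponentiated-gradient update are illustrative assumptions, not the authors' experimental setup.

```python
import math

# Common payoff u(x, y) = x^T A y; A symmetric, so the game is symmetric.
# Players prefer to land on *different* actions (anti-coordination).
A = [[0.0, 1.0],
     [1.0, 0.0]]

def payoff(x, y):
    return sum(x[i] * A[i][j] * y[j] for i in range(2) for j in range(2))

def grad(x):
    # Gradient of the symmetric objective u(x, x): since A is symmetric,
    # d/dx [x^T A x] = 2 A x.
    return [2.0 * sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]

# Gradient ascent in symmetric strategy space (both players play x),
# using an exponentiated-gradient step to stay on the simplex.
x = [0.6, 0.4]
for _ in range(500):
    g = grad(x)
    w = [x[i] * math.exp(0.1 * g[i]) for i in range(2)]
    s = sum(w)
    x = [wi / s for wi in w]

value = payoff(x, x)  # converges to the mixed profile x ~ [0.5, 0.5]

# Nash check: a best unilateral deviation is some pure action, and neither
# pure action beats the symmetric profile's value.
best_dev = max(payoff([1.0, 0.0], x), payoff([0.0, 1.0], x))
print(x, value, best_dev)

# ...but a *joint, asymmetric* deviation does strictly better: if the players
# split onto different pure actions, the common payoff rises to 1.0.
print(payoff([1.0, 0.0], [0.0, 1.0]))
```

Here local optimality in the symmetric space certifies stability against unilateral deviation, exactly as the theorem promises, while the joint deviation to opposite pure actions shows why mixed local optima can still be undesirable team solutions.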

Author Information

Scott Emmons (UC Berkeley)
Caspar Oesterheld (Carnegie Mellon University)
Andrew Critch (UC Berkeley)
Vincent Conitzer (Duke)
Stuart Russell (UC Berkeley)
