Learning in Nonzero-Sum Stochastic Games with Potentials
David Mguni · Yutong Wu · Yali Du · Yaodong Yang · Ziyi Wang · Minne Li · Ying Wen · Joel Jennings · Jun Wang

Wed Jul 21 07:25 AM -- 07:30 AM (PDT)
Multi-agent reinforcement learning (MARL) has become effective at tackling discrete cooperative game scenarios. However, MARL has yet to penetrate settings beyond those modelled by team and zero-sum games, confining it to a small subset of multi-agent systems. In this paper, we introduce a new generation of MARL learners that can handle nonzero-sum payoff structures and continuous settings. In particular, we study the MARL problem in a class of games known as stochastic potential games (SPGs) with continuous state-action spaces. Unlike cooperative games, in which all agents share a common reward, SPGs are capable of modelling real-world scenarios where agents seek to fulfil their individual goals. We prove theoretically that our learning method enables independent agents to learn Nash equilibrium strategies in polynomial time. We demonstrate that our framework tackles previously unsolvable tasks such as Coordination Navigation and large selfish routing games, and that it outperforms state-of-the-art MARL baselines such as MADDPG and COMIX in such scenarios.
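The potential-game structure the abstract relies on can be illustrated with the standard textbook definition: a game is a potential game if every player's payoff change from a unilateral deviation equals the change in a single shared potential function. The sketch below (not the paper's method; a hypothetical two-player congestion game with Rosenthal's potential) checks this defining identity exhaustively:

```python
from itertools import product

# Hypothetical 2-player congestion game: each player picks route 0 or 1.
# A player's cost (negated payoff) is the number of players on their route.
def payoff(i, a):
    congestion = sum(1 for aj in a if aj == a[i])
    return -congestion

# Rosenthal potential: for each route, subtract 1 + 2 + ... + load.
def potential(a):
    total = 0
    for r in (0, 1):
        load = sum(1 for aj in a if aj == r)
        total -= sum(range(1, load + 1))
    return total

# Defining property of a potential game, checked over all profiles and
# all unilateral deviations:
# u_i(a_i', a_-i) - u_i(a) == Phi(a_i', a_-i) - Phi(a)
for a in product((0, 1), repeat=2):
    for i in (0, 1):
        for dev in (0, 1):
            b = list(a); b[i] = dev; b = tuple(b)
            assert payoff(i, b) - payoff(i, a) == potential(b) - potential(a)
print("potential property holds")
```

Because best-response dynamics ascend such a potential, equilibria of these games are amenable to learning, which is the structure SPGs extend to the stochastic, continuous setting studied in the paper.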

Author Information

David Mguni (Noah's Ark Laboratory, Huawei)
Yutong Wu (Institute of Automation, Chinese Academy of Sciences)
Yali Du (University College London)

Yali Du is a third-year PhD student whose research focuses on matrix completion and its applications to recommender systems, multi-label learning, and social analysis. She is enthusiastic about communicating with and learning from other researchers. She has published two full-length papers at IJCAI 2017.

Yaodong Yang (Huawei)
Ziyi Wang (Peking University)
Minne Li (University College London)
Ying Wen (Shanghai Jiao Tong University)
Joel Jennings (Huawei)
Jun Wang (UCL)

Related Events (a corresponding poster, oral, or spotlight)
