Poster in Workshop: New Frontiers in Learning, Control, and Dynamical Systems
Stability of Multi-Agent Learning: Convergence in Network Games with Many Players
Aamal Hussain · Dan Leonte · Francesco Belardinelli · Georgios Piliouras
The behaviour of multi-agent learning in many-player games has been shown to display complex dynamics outside of restrictive examples such as network zero-sum games. In addition, it has been shown that convergent behaviour becomes less likely as the number of players increases. To make progress towards resolving this problem, we study Q-learning dynamics and determine a sufficient condition for the dynamics to converge to a unique equilibrium in any network game. We find that this condition depends on the nature of the pairwise interactions and on the network structure, but is explicitly independent of the total number of agents in the game. We evaluate this result on a number of representative network games and show that, under suitable network conditions, stable learning dynamics can be achieved with an arbitrary number of agents.
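As an illustrative sketch of the setting (not the paper's exact construction), smoothed ("Boltzmann") Q-learning dynamics on a network game can be simulated directly: agents sit on the nodes of a graph, play a pairwise matrix game along each edge, and update Q-values towards their expected payoffs while acting with a softmax policy. The ring network, the random payoff matrices, and the temperature and step-size values below are all assumptions chosen for illustration; with sufficiently high exploration the joint strategy profile settles to a unique fixed point regardless of how many agents are placed on the ring.

```python
import numpy as np

# Hedged sketch: smoothed (Boltzmann) Q-learning dynamics on a ring network
# game. Payoffs, temperature T, and learning rate alpha are illustrative
# assumptions, not the paper's construction.
rng = np.random.default_rng(0)

n_agents, n_actions = 6, 3
# Ring network: each agent plays a pairwise bimatrix game with its neighbour.
edges = [(i, (i + 1) % n_agents) for i in range(n_agents)]
# A[e][a_i, a_j]: payoff to the first endpoint; B[e][a_i, a_j]: to the second.
A = {e: rng.normal(size=(n_actions, n_actions)) for e in edges}
B = {e: rng.normal(size=(n_actions, n_actions)) for e in edges}

T = 5.0       # exploration temperature (high exploration aids convergence)
alpha = 0.05  # learning rate

def softmax(q):
    z = np.exp((q - q.max()) / T)
    return z / z.sum()

Q = rng.normal(size=(n_agents, n_actions))
x = np.array([softmax(Q[i]) for i in range(n_agents)])
for _ in range(5000):
    # Expected reward for each action: sum of pairwise games with neighbours.
    r = np.zeros((n_agents, n_actions))
    for (i, j) in edges:
        r[i] += A[(i, j)] @ x[j]
        r[j] += B[(i, j)].T @ x[i]
    Q += alpha * (r - Q)  # Q-values track expected payoffs
    x_new = np.array([softmax(Q[i]) for i in range(n_agents)])
    delta = np.abs(x_new - x).max()  # per-step change in strategies
    x = x_new

print("final strategies:\n", np.round(x, 3))
print("last strategy change:", delta)
```

Rerunning with a larger `n_agents` on the same ring leaves the qualitative behaviour unchanged, which is the flavour of the abstract's claim: the convergence condition depends on the pairwise games and the network, not on the player count.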