
Mean Field Multi-Agent Reinforcement Learning
Yaodong Yang · Rui Luo · Minne Li · Ming Zhou · Weinan Zhang · Jun Wang

Fri Jul 13 08:00 AM -- 08:20 AM (PDT) @ A1

Existing multi-agent reinforcement learning methods are typically limited to a small number of agents. As the number of agents grows large, learning becomes intractable due to the curse of dimensionality and the exponential growth of agent interactions. In this paper, we present Mean Field Reinforcement Learning, where the interactions within the population of agents are approximated by those between a single agent and the average effect of the overall population or of neighboring agents. The interplay between these two entities is mutually reinforcing: the learning of an individual agent's optimal policy depends on the dynamics of the population, while the dynamics of the population change according to the collective patterns of the individual policies. We develop practical mean field Q-learning and mean field Actor-Critic algorithms and analyze the convergence of the solution to a Nash equilibrium. Experiments on Gaussian squeeze, the Ising model, and battle games demonstrate the learning effectiveness of our mean field approaches. In addition, we report the first result solving the Ising model with model-free reinforcement learning methods.
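The core idea above, replacing pairwise agent interactions with an interaction between each agent and the mean action of its neighbors, can be illustrated with a minimal sketch of a mean-field Q-learning step. This is not the paper's implementation: the names, the linear parameterization of Q in the mean action, and the constants are all illustrative assumptions; the paper uses neural networks and a softmax (Boltzmann) policy.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 4, 3
ALPHA, GAMMA, BETA = 0.1, 0.95, 1.0  # step size, discount, inverse temperature

# Hypothetical linear model: Q(s, a, a_mean) = W[s, a] @ a_mean,
# where a_mean is the empirical distribution of neighbors' actions.
W = rng.normal(scale=0.1, size=(N_STATES, N_ACTIONS, N_ACTIONS))

def mean_action(neighbor_actions):
    """Mean-field approximation: average of neighbors' one-hot actions."""
    return np.eye(N_ACTIONS)[neighbor_actions].mean(axis=0)

def q_values(s, a_mean):
    """Q(s, ., a_mean) for every own action, under the linear model."""
    return W[s] @ a_mean

def boltzmann(q, beta=BETA):
    """Softmax policy over Q-values with inverse temperature beta."""
    z = np.exp(beta * (q - q.max()))
    return z / z.sum()

def mf_q_update(s, a, r, s_next, a_mean, a_mean_next):
    """One mean-field Q-learning step: the TD target uses the soft
    (Boltzmann-weighted) value of the next state, evaluated at the
    neighbors' next mean action."""
    q_next = q_values(s_next, a_mean_next)
    v_next = boltzmann(q_next) @ q_next          # soft state value
    td_error = r + GAMMA * v_next - q_values(s, a_mean)[a]
    W[s, a] += ALPHA * td_error * a_mean         # semi-gradient update
```

The key reduction is visible in the signatures: Q conditions only on the agent's own action and a single mean action vector, so the cost of evaluating interactions no longer grows exponentially with the number of agents.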

Author Information

Yaodong Yang (University College London)
Rui Luo (UCL)
Minne Li (University College London)
Ming Zhou (Shanghai Jiao Tong University)
Weinan Zhang (Shanghai Jiao Tong University)
Jun Wang (UCL)
