
Global Convergence of Policy Gradient for Linear-Quadratic Mean-Field Control/Game in Continuous Time
Weichen Wang · Jiequn Han · Zhuoran Yang · Zhaoran Wang

Wed Jul 21 09:00 AM -- 11:00 AM (PDT)

Recent years have witnessed the success of multi-agent reinforcement learning, which has motivated new research directions for mean-field control (MFC) and mean-field games (MFG), since a multi-agent system is well approximated by a mean-field problem when the number of agents grows large. In this paper, we study the policy gradient (PG) method for linear-quadratic mean-field control and games, where each agent is assumed to have identical linear state transitions and quadratic cost functions. While most recent work on policy gradient for MFC and MFG is based on discrete-time models, we focus on a continuous-time model whose analysis techniques may be of independent interest. For both the MFC and the MFG, we provide a PG update and show that it converges to the optimal solution at a linear rate, which is verified by a synthetic simulation. For the MFG, we also provide sufficient conditions for the existence and uniqueness of the Nash equilibrium.
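To give a flavor of the linear convergence claimed in the abstract, here is a toy sketch of policy gradient on a single-agent scalar continuous-time LQR (no mean-field term), which is not the paper's algorithm but shares its structure: gradient descent on a linear policy gain converges geometrically to the Riccati optimum. All parameter values (`a`, `b`, `q`, `r`, the step size) are illustrative assumptions.

```python
import math

# Toy scalar continuous-time LQR: dx/dt = (a - b*k) x under the linear
# policy u = -k x, with cost J(k) = \int_0^\infty (q + r k^2) x(t)^2 dt.
# For x(0) = 1 the closed form is J(k) = (q + r k^2) / (2 (b k - a)),
# valid whenever b k > a (closed loop stable). Parameters are illustrative.
a, b, q, r = 1.0, 1.0, 1.0, 1.0

def grad_J(k):
    """Exact policy gradient dJ/dk for the scalar problem above."""
    num = 2 * r * k * (b * k - a) - (q + r * k ** 2) * b
    return num / (2 * (b * k - a) ** 2)

# Known optimum from the scalar Riccati equation:
# k* = (a + sqrt(a^2 + b^2 q / r)) / b.
k_star = (a + math.sqrt(a ** 2 + b ** 2 * q / r)) / b

k, eta = 2.0, 0.5            # stabilizing initialization (b*k > a), step size
errs = []
for _ in range(200):
    k -= eta * grad_J(k)     # plain gradient step on the policy gain
    errs.append(abs(k - k_star))

# Linear (geometric) rate: successive error ratios settle to a constant < 1.
print(f"k = {k:.6f}, k* = {k_star:.6f}")
print("error ratios:", [round(errs[i + 1] / errs[i], 3) for i in range(3)])
```

The paper's setting additionally couples each agent's dynamics and cost to the population mean state, so the PG analysis there must control this mean-field interaction; the geometric error decay above is only the single-agent analogue of that result.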

Author Information

Weichen Wang (The University of Hong Kong)
Jiequn Han (Princeton University)
Zhuoran Yang (Princeton University)
Zhaoran Wang (Northwestern University)
