Talk
Stabilising Experience Replay for Deep Multi-Agent Reinforcement Learning
Jakob Foerster · Nantas Nardelli · Gregory Farquhar · Triantafyllos Afouras · Phil Torr · Pushmeet Kohli · Shimon Whiteson

Tue Aug 08 09:06 PM -- 09:24 PM (PDT) @ C4.5

Many real-world problems, such as network packet routing and urban traffic control, are naturally modeled as multi-agent reinforcement learning (RL) problems. However, existing multi-agent RL methods typically scale poorly with the problem size. Therefore, a key challenge is to translate the success of deep learning on single-agent RL to the multi-agent setting. A major stumbling block is that independent Q-learning, the most popular multi-agent RL method, introduces nonstationarity that makes it incompatible with the experience replay memory on which deep Q-learning relies. This paper proposes two methods that address this problem: 1) using a multi-agent variant of importance sampling to naturally decay obsolete data and 2) conditioning each agent's value function on a fingerprint that disambiguates the age of the data sampled from the replay memory. Results on a challenging decentralised variant of StarCraft unit micromanagement confirm that these methods enable the successful combination of experience replay with multi-agent RL.
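
To make the second method more concrete, the sketch below illustrates a fingerprint-augmented replay buffer in Python. It is not the authors' code: it assumes the fingerprint is simply the training iteration and exploration rate appended to each agent's observation, and the class and parameter names (e.g. FingerprintReplayBuffer) are illustrative.

```python
import random
from collections import deque

class FingerprintReplayBuffer:
    """Illustrative replay buffer: each stored observation is augmented with a
    fingerprint (here, training iteration and exploration rate) so the value
    function can condition on how old the sampled experience is."""

    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)

    def add(self, obs, action, reward, next_obs, done, iteration, epsilon):
        # The fingerprint tags the transition with the state of training at
        # collection time, disambiguating data generated under different
        # (and now obsolete) policies of the other agents.
        fingerprint = (float(iteration), float(epsilon))
        self.buffer.append(
            (obs + fingerprint, action, reward, next_obs + fingerprint, done)
        )

    def sample(self, batch_size):
        # Uniform sampling; the Q-network sees the fingerprint as extra input.
        return random.sample(list(self.buffer), batch_size)


# Hypothetical usage: observations are tuples, so the fingerprint is
# concatenated onto them before storage.
buffer = FingerprintReplayBuffer()
buffer.add(obs=(0.1, 0.5), action=2, reward=1.0,
           next_obs=(0.2, 0.4), done=False, iteration=1200, epsilon=0.05)
batch = buffer.sample(1)
```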

Author Information

Jakob Foerster (University of Oxford)
Nantas Nardelli (University of Oxford)
Gregory Farquhar (University of Oxford)
Triantafyllos Afouras (University of Oxford)
Phil Torr (University of Oxford)
Pushmeet Kohli (Microsoft Research)
Shimon Whiteson (University of Oxford)
