Oral

Cooperative Exploration for Multi-Agent Deep Reinforcement Learning

Iou-Jen Liu · Unnat Jain · Raymond Yeh · Alex Schwing

Tue 20 Jul 7 p.m. — 7:20 p.m. PDT

Exploration is critical for good results in deep reinforcement learning and has attracted much attention. However, existing multi-agent deep reinforcement learning algorithms still rely mostly on noise-based techniques. Very recently, exploration methods that consider cooperation among multiple agents have been developed. However, existing methods suffer from a common challenge: agents struggle to identify states that are worth exploring, and can hardly coordinate their exploration efforts toward those states. To address this shortcoming, in this paper, we propose cooperative multi-agent exploration (CMAE): agents share a common goal while exploring. The goal is selected from multiple projected state spaces via a normalized entropy-based technique. Then, agents are trained to reach the goal in a coordinated manner. We demonstrate that CMAE consistently outperforms baselines on various tasks, including a sparse-reward version of the multiple-particle environment (MPE) and the StarCraft multi-agent challenge (SMAC).
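To make the normalized entropy-based goal selection concrete, below is a minimal sketch of the idea described in the abstract: track visitation counts over several projected (restricted) state spaces, score each space by its normalized entropy, pick the least uniformly explored space, and choose a rarely visited state in it as the shared goal. All function names, the count data structures, and the toy projections are illustrative assumptions, not the authors' implementation.

```python
import math
from collections import Counter

def normalized_entropy(counts):
    """Normalized entropy of a visitation-count distribution: H(p) / log(K),
    where K is the number of distinct states observed (1.0 means uniform)."""
    total = sum(counts.values())
    if total == 0 or len(counts) < 2:
        return 0.0
    probs = [c / total for c in counts.values()]
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return entropy / math.log(len(counts))

def select_goal(projected_visits):
    """Pick the projected space whose visitation distribution has the lowest
    normalized entropy (i.e., is least uniformly explored), then return a
    rarely visited state in that space as the shared exploration goal."""
    space = min(projected_visits,
                key=lambda name: normalized_entropy(projected_visits[name]))
    goal = min(projected_visits[space], key=projected_visits[space].get)
    return space, goal

# Toy example: visitation counts over two hypothetical 1-D projections.
visits = {
    "proj_x": Counter({(0,): 50, (1,): 48, (2,): 47}),  # near-uniform: well explored
    "proj_y": Counter({(0,): 140, (1,): 4, (2,): 1}),   # skewed: underexplored
}
space, goal = select_goal(visits)
print(space, goal)  # proj_y (2,) -- the skewed space and its rarest state
```

With these counts, `proj_y` has a much lower normalized entropy than the near-uniform `proj_x`, so it is selected, and its least-visited state becomes the goal the agents are then trained to reach together.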
