

Poster

Individual Contributions as Intrinsic Exploration Scaffolds for Multi-agent Reinforcement Learning

Xinran Li · Zifan LIU · Shibo Chen · Jun Zhang

Hall C 4-9 #1215
[ Paper PDF ] [ Slides ] [ Poster ]
Thu 25 Jul 2:30 a.m. PDT — 4 a.m. PDT

Abstract:

In multi-agent reinforcement learning (MARL), effective exploration is critical, especially in sparse reward environments. Although introducing global intrinsic rewards can foster exploration in such settings, it often complicates credit assignment among agents. To address this difficulty, we propose Individual Contributions as intrinsic Exploration Scaffolds (ICES), a novel approach to motivate exploration by assessing each agent's contribution from a global view. In particular, ICES constructs exploration scaffolds with Bayesian surprise, leveraging global transition information during centralized training. These scaffolds, used only in training, help to guide individual agents towards actions that significantly impact the global latent state transitions. Additionally, ICES separates exploration policies from exploitation policies, enabling the former to utilize privileged global information during training. Extensive experiments on cooperative benchmark tasks with sparse rewards, including Google Research Football (GRF) and StarCraft Multi-agent Challenge (SMAC), demonstrate that ICES exhibits superior exploration capabilities compared with baselines. The code is publicly available at https://github.com/LXXXXR/ICES.
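The sketch below illustrates the core idea described in the abstract: a training-time, Bayesian-surprise-style credit that measures how much one agent's action shifts the predicted distribution over the next global latent state. It is not the authors' implementation (see the linked repository for the official code); the latent transition model, the zeroed-action counterfactual baseline, the network sizes, and all names below are assumptions made for illustration only.

```python
# Illustrative sketch of a Bayesian-surprise-style individual contribution signal.
# All components here are assumptions for clarity, not the official ICES code.
import torch
import torch.nn as nn


class LatentTransitionModel(nn.Module):
    """Predicts a diagonal-Gaussian distribution over the next global latent state
    from the current global state and the joint action (centralized training only)."""

    def __init__(self, state_dim: int, joint_action_dim: int, latent_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + joint_action_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 2 * latent_dim),  # outputs mean and log-variance
        )

    def forward(self, state, joint_action):
        mean, log_var = self.net(torch.cat([state, joint_action], dim=-1)).chunk(2, dim=-1)
        return mean, log_var


def gaussian_kl(mean_p, log_var_p, mean_q, log_var_q):
    """KL( N(mean_p, var_p) || N(mean_q, var_q) ) for diagonal Gaussians, summed over dims."""
    var_p, var_q = log_var_p.exp(), log_var_q.exp()
    kl = 0.5 * (log_var_q - log_var_p + (var_p + (mean_p - mean_q) ** 2) / var_q - 1.0)
    return kl.sum(dim=-1)


def individual_contribution(model, state, joint_action, agent_idx, action_slices):
    """Training-time exploration scaffold for one agent: how much the predicted
    global latent transition changes when that agent's action is replaced by a
    neutral counterfactual (here, simply zeroed -- an assumed baseline)."""
    mean_full, log_var_full = model(state, joint_action)

    counterfactual = joint_action.clone()
    counterfactual[..., action_slices[agent_idx]] = 0.0  # assumed neutral action
    mean_cf, log_var_cf = model(state, counterfactual)

    # Larger KL => this agent's action had more influence on the global latent
    # state transition, so its exploration policy is rewarded more.
    return gaussian_kl(mean_full, log_var_full, mean_cf, log_var_cf)
```

In such a scheme, this signal would shape only a separate exploration policy during centralized training, leaving the exploitation policy free of privileged global information at execution time.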
