Poster in Workshop: Reinforcement Learning for Real Life

Multi-agent Deep Covering Option Discovery

Jiayu Chen · Marina W Haliem · Tian Lan · Vaneet Aggarwal


Abstract:

The use of options can greatly accelerate exploration in RL, especially when only sparse reward signals are available. While option discovery methods have been proposed for individual agents, multi-agent reinforcement learning (MARL) settings raise a problem that has not been considered: discovering collaborative options that coordinate the behavior of multiple agents and encourage them to jointly visit under-explored regions of the state space. In this paper, we propose a novel framework for multi-agent deep covering option discovery. It first leverages an attention mechanism to identify the collaborative agent subgroups that would benefit most from coordination. Then, a hierarchical algorithm based on soft actor-critic, H-MSAC, learns the multi-agent options for each subgroup and integrates them through a high-level policy. This hierarchical option construction allows the framework to strike a balance between scalability and effective collaboration among the agents. Evaluations on multi-agent collaborative tasks show that the proposed algorithm effectively captures agent interactions during learning and significantly outperforms prior works that use single-agent options or no options, achieving both faster exploration and higher task rewards.
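To make the two-level structure described above concrete, here is a minimal PyTorch sketch of (a) an attention module that scores pairwise agent interactions for subgroup formation and (b) a high-level policy that selects which option a subgroup executes. All names and shapes (SubgroupAttention, HighLevelPolicy, obs_dim, n_options, the grouping threshold) are our assumptions for illustration; the paper's actual H-MSAC architecture, its soft actor-critic learners, and its covering-option objective are not reproduced here.

```python
# Illustrative sketch only; not the paper's H-MSAC implementation.
import torch
import torch.nn as nn


class SubgroupAttention(nn.Module):
    """Scores pairwise agent interactions; strongly attending pairs
    are candidates for a collaborative subgroup."""

    def __init__(self, obs_dim: int, embed_dim: int = 32):
        super().__init__()
        self.query = nn.Linear(obs_dim, embed_dim)
        self.key = nn.Linear(obs_dim, embed_dim)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # obs: (n_agents, obs_dim) -> attention matrix (n_agents, n_agents)
        q, k = self.query(obs), self.key(obs)
        scores = q @ k.T / (k.shape[-1] ** 0.5)
        return torch.softmax(scores, dim=-1)


class HighLevelPolicy(nn.Module):
    """Chooses, per subgroup, which learned multi-agent option to run."""

    def __init__(self, obs_dim: int, n_options: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_options)
        )

    def forward(self, group_obs: torch.Tensor) -> torch.distributions.Categorical:
        # group_obs: pooled observation of one subgroup
        return torch.distributions.Categorical(logits=self.net(group_obs))


# Tiny usage demo: 4 agents, 8-dim observations, 3 candidate options.
obs = torch.randn(4, 8)
attn = SubgroupAttention(obs_dim=8)(obs)
# Pair agents whose mutual attention exceeds a threshold (this grouping
# heuristic is our assumption; the paper's selection rule may differ).
groups = (attn > 0.3).nonzero().tolist()
policy = HighLevelPolicy(obs_dim=8, n_options=3)
option = policy(obs.mean(dim=0)).sample()  # pooled subgroup observation
```

In the full framework, each selected option would itself be a low-level multi-agent policy trained with soft actor-critic to drive its subgroup toward under-explored states; that training loop is omitted from this sketch.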
