Poster

Actor-Attention-Critic for Multi-Agent Reinforcement Learning

Shariq Iqbal · Fei Sha

Pacific Ballroom #59

Keywords: [ Deep Reinforcement Learning ] [ Multiagent Learning ]


Abstract:

Reinforcement learning in multi-agent scenarios is important for real-world applications but presents challenges beyond those seen in single-agent settings. We present an actor-critic algorithm that trains decentralized policies in multi-agent settings, using centrally computed critics that share an attention mechanism which selects relevant information for each agent at every timestep. This attention mechanism enables more effective and scalable learning in complex multi-agent environments compared to recent approaches. Our approach is applicable not only to cooperative settings with shared rewards, but also to individualized reward settings, including adversarial settings, as well as settings that do not provide global states, and it makes no assumptions about the action spaces of the agents. As such, it is flexible enough to be applied to most multi-agent learning problems.
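The sketch below illustrates the core idea described above: a centralized critic in which each agent's Q-value attends over the other agents' encoded observation-action pairs. It is not the authors' implementation; the class name, dimensions (obs_dim, act_dim, hidden_dim), and single-head attention are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of a centralized critic with an
# attention mechanism over other agents, assuming a fixed number of agents
# and hypothetical dimensions obs_dim, act_dim, hidden_dim.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionCritic(nn.Module):
    """Per-agent Q-estimates that attend over the other agents' encoded
    observation-action pairs (a sketch of the idea in the abstract)."""

    def __init__(self, n_agents, obs_dim, act_dim, hidden_dim=64):
        super().__init__()
        self.n_agents = n_agents
        # One encoder per agent maps (obs, act) to a hidden embedding.
        self.encoders = nn.ModuleList(
            nn.Linear(obs_dim + act_dim, hidden_dim) for _ in range(n_agents)
        )
        # Attention projections (queries, keys, values) shared by all critics.
        self.query = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.key = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.value = nn.Linear(hidden_dim, hidden_dim, bias=False)
        # Per-agent heads combine own encoding with attended information.
        self.heads = nn.ModuleList(
            nn.Linear(2 * hidden_dim, 1) for _ in range(n_agents)
        )

    def forward(self, obs, acts):
        # obs, acts: lists with one (batch, dim) tensor per agent.
        enc = [F.relu(e(torch.cat([o, a], dim=-1)))
               for e, o, a in zip(self.encoders, obs, acts)]
        enc = torch.stack(enc, dim=1)                # (batch, n_agents, hidden)
        q_values = []
        for i in range(self.n_agents):
            others = torch.cat([enc[:, :i], enc[:, i + 1:]], dim=1)
            q_i = self.query(enc[:, i]).unsqueeze(1)     # (batch, 1, hidden)
            k = self.key(others)                         # (batch, n-1, hidden)
            v = self.value(others)
            scores = torch.bmm(q_i, k.transpose(1, 2)) / k.shape[-1] ** 0.5
            attn = F.softmax(scores, dim=-1)             # weights over other agents
            attended = torch.bmm(attn, v).squeeze(1)     # (batch, hidden)
            q_values.append(
                self.heads[i](torch.cat([enc[:, i], attended], dim=-1))
            )
        return q_values  # list of (batch, 1) Q-estimates, one per agent
```

Because the attention weights are computed per agent at every timestep, each critic can focus on whichever teammates or opponents are currently relevant, which is what allows the approach to scale with the number of agents.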
