Poster in Workshop: Decision Awareness in Reinforcement Learning

Leader-based Decision Learning for Cooperative Multi-Agent Reinforcement Learning

Wenqi Chen · Xin Zeng · Amber Li


Abstract:

In social learning among both humans and animals, a leader in a team enables more efficient learning for novices. This paper constructs a leader-based decision learning framework for Multi-Agent Reinforcement Learning and investigates whether a leader likewise accelerates the learning of novice agents. We compare three approaches to distilling a leader's experiences: Linear Layer Dimension Reduction, Attentive Graph Pooling, and an Attention-based Graph Neural Network. We show that leader-based decision learning can 1) enable agents to learn faster, cooperate more effectively, and escape local optima, and 2) improve the generalizability of agents in more challenging and unseen environments. The key to effective distillation is to retain and aggregate the most important information.
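The abstract names attentive pooling as one way to compress a leader's experiences into a fixed-size summary for novices. The following is a minimal, hypothetical sketch of such an attentive pooling module in PyTorch; the class and variable names (AttentivePool, leader_embeddings) are illustrative assumptions and do not come from the paper itself.

```python
# Hypothetical sketch: attentive pooling over a set of leader experience embeddings.
# Not the authors' implementation; all names here are illustrative.
import torch
import torch.nn as nn


class AttentivePool(nn.Module):
    """Aggregate a variable-length set of embeddings with learned attention weights."""

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # scalar attention score per embedding

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_items, dim), e.g. per-step embeddings of the leader's experiences
        weights = torch.softmax(self.score(x), dim=0)  # (num_items, 1) attention weights
        return (weights * x).sum(dim=0)                # (dim,) pooled summary vector


if __name__ == "__main__":
    pool = AttentivePool(dim=32)
    leader_embeddings = torch.randn(10, 32)  # 10 hypothetical leader experience steps
    summary = pool(leader_embeddings)
    print(summary.shape)  # torch.Size([32])
```

The design choice illustrated here is that attention weights let the pooled summary emphasize the most informative experiences rather than averaging them uniformly, which matches the abstract's point that effective distillation depends on retaining and aggregating important information.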
