Poster

Scalable Multi-Agent Reinforcement Learning through Intelligent Information Aggregation

Siddharth Nagar Nayak · Kenneth Choi · Wenqi Ding · Sydney Dolan · Karthik Gopalakrishnan · Hamsa Balakrishnan

Exhibit Hall 1 #740

Abstract:

We consider the problem of multi-agent navigation and collision avoidance when observations are limited to the local neighborhood of each agent. We propose InforMARL, a novel architecture for multi-agent reinforcement learning (MARL) which uses local information intelligently to compute paths for all the agents in a decentralized manner. Specifically, InforMARL aggregates information about the local neighborhood of agents for both the actor and the critic using a graph neural network and can be used in conjunction with any standard MARL algorithm. We show that (1) in training, InforMARL has better sample efficiency and performance than baseline approaches, despite using less information, and (2) in testing, it scales well to environments with arbitrary numbers of agents and obstacles. We illustrate these results using four task environments, including one with predetermined goals for each agent, and one in which the agents collectively try to cover all goals.
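The abstract describes aggregating each agent's local neighborhood with a graph neural network and feeding the result to both the actor and the critic. A minimal NumPy sketch of that idea is below; the function names, dimensions, and single mean-aggregation round are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical sketch of neighborhood aggregation in the InforMARL spirit:
# connect nearby entities, then mean-aggregate transformed neighbor features.
# All names and sizes here are assumptions for illustration.

def build_adjacency(positions, radius):
    """Connect entities that lie within each other's sensing radius."""
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    adj = (dist < radius).astype(float)
    np.fill_diagonal(adj, 0.0)  # no self-loops
    return adj

def gnn_aggregate(features, adj, weight):
    """One round of mean message passing (a stand-in for a GNN layer):
    each node averages its neighbors' linearly transformed features."""
    messages = features @ weight                       # per-node transform
    deg = adj.sum(axis=1, keepdims=True).clip(min=1.0) # avoid divide-by-zero
    return np.tanh(adj @ messages / deg)               # mean over neighbors

rng = np.random.default_rng(0)
n_agents, feat_dim, hid_dim = 5, 4, 8
positions = rng.uniform(0.0, 1.0, size=(n_agents, 2))
features = rng.normal(size=(n_agents, feat_dim))
W = 0.1 * rng.normal(size=(feat_dim, hid_dim))

adj = build_adjacency(positions, radius=0.5)
agg = gnn_aggregate(features, adj, W)  # shape: (n_agents, hid_dim)

# The same per-agent embedding would feed both the actor (policy) and the
# critic; because aggregation is over local neighbors only, the computation
# is independent of the total number of agents, which is what lets the
# approach scale to arbitrary team sizes at test time.
print(agg.shape)
```

Because the aggregation depends only on each agent's local graph, the same learned weights apply unchanged when the number of agents or obstacles grows, which is the scalability property the abstract highlights.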
