Learning to coordinate between multiple agents is an important problem in many reinforcement learning settings. Key to learning to coordinate is exploiting loose couplings, i.e., conditional independences between agents. In this paper we study learning in repeated fully cooperative games, multi-agent multi-armed bandits (MAMABs), in which the expected rewards can be expressed as a coordination graph. We propose multi-agent upper confidence exploration (MAUCE), a new algorithm for MAMABs that exploits loose couplings, which enables us to prove a regret bound that is logarithmic in the number of arm pulls and only linear in the number of agents. We empirically compare MAUCE to sparse cooperative Q-learning and a state-of-the-art combinatorial bandit approach, and show that it performs much better in a variety of settings, including learning control policies for wind farms.
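The setting described above can be illustrated with a minimal sketch. The code below is NOT the authors' MAUCE implementation; it only shows the MAMAB structure the abstract describes: the expected reward decomposes over a coordination graph into local components (here, a hypothetical component f1 depending on agent 1 alone, and f12 coupling agents 1 and 2), and exploration selects the joint arm maximizing a sum of per-component UCB-style upper bounds. For clarity this sketch enumerates all joint arms, whereas MAUCE's point is to exploit the graph structure to avoid that enumeration.

```python
import itertools
import math
import random

# Illustrative sketch only (not MAUCE): a 2-agent MAMAB whose expected
# reward decomposes over a coordination graph into local components.
random.seed(0)

N_ARMS = 2       # arms per agent (assumed for the example)
HORIZON = 2000   # number of joint arm pulls

# Hypothetical true local mean rewards: f1 depends only on agent 1's arm;
# f12 couples agents 1 and 2 (the "loose coupling").
f1 = {0: 0.2, 1: 0.5}
f12 = {(0, 0): 0.1, (0, 1): 0.6, (1, 0): 0.3, (1, 1): 0.4}

# Per-component pull counts and reward sums for the empirical means.
counts1 = {a: 0 for a in f1}
sums1 = {a: 0.0 for a in f1}
counts12 = {a: 0 for a in f12}
sums12 = {a: 0.0 for a in f12}

def ucb(total, count, t):
    """Standard UCB1-style upper bound on a local component's mean."""
    if count == 0:
        return float("inf")  # force each component value to be tried
    return total / count + math.sqrt(2.0 * math.log(t) / count)

for t in range(1, HORIZON + 1):
    # Pick the joint arm maximizing the sum of local upper bounds.
    a1, a2 = max(
        itertools.product(range(N_ARMS), repeat=2),
        key=lambda a: ucb(sums1[a[0]], counts1[a[0]], t)
                      + ucb(sums12[a], counts12[a], t),
    )
    # Sample noisy local rewards (Bernoulli noise, for the sketch).
    r1 = 1.0 if random.random() < f1[a1] else 0.0
    r12 = 1.0 if random.random() < f12[(a1, a2)] else 0.0
    counts1[a1] += 1
    sums1[a1] += r1
    counts12[(a1, a2)] += 1
    sums12[(a1, a2)] += r12

# The optimal joint arm maximizes the sum of true local means:
# f1(1) + f12(1, 1) = 0.5 + 0.4 = 0.9.
best_joint = max(f12, key=lambda a: f1[a[0]] + f12[a])
print("optimal joint arm:", best_joint)
print("pulls per joint arm:", counts12)
```

Because the two local upper bounds are computed from separate counts, a sparse graph lets each component's estimate be shared across every joint arm that contains it, which is what makes the regret depend on the number of agents rather than on the exponentially large joint-arm space.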
Author Information
Eugenio Bargiacchi (Vrije Universiteit Brussel)
Timothy Verstraeten (Vrije Universiteit Brussel)
Diederik Roijers (Vrije Universiteit Brussel)
Ann Nowé (Vrije Universiteit Brussel)
Hado van Hasselt (DeepMind)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Poster: Learning to Coordinate with Coordination Graphs in Repeated Single-Stage Multi-Agent Decision Problems »
  Thu. Jul 12th 04:15 -- 07:00 PM, Room Hall B #126
More from the Same Authors
- 2021 Poster: Emphatic Algorithms for Deep Reinforcement Learning »
  Ray Jiang · Tom Zahavy · Zhongwen Xu · Adam White · Matteo Hessel · Charles Blundell · Hado van Hasselt
- 2021 Spotlight: Emphatic Algorithms for Deep Reinforcement Learning »
  Ray Jiang · Tom Zahavy · Zhongwen Xu · Adam White · Matteo Hessel · Charles Blundell · Hado van Hasselt
- 2021 Poster: Muesli: Combining Improvements in Policy Optimization »
  Matteo Hessel · Ivo Danihelka · Fabio Viola · Arthur Guez · Simon Schmitt · Laurent Sifre · Theophane Weber · David Silver · Hado van Hasselt
- 2021 Spotlight: Muesli: Combining Improvements in Policy Optimization »
  Matteo Hessel · Ivo Danihelka · Fabio Viola · Arthur Guez · Simon Schmitt · Laurent Sifre · Theophane Weber · David Silver · Hado van Hasselt
- 2020 Poster: What Can Learned Intrinsic Rewards Capture? »
  Zeyu Zheng · Junhyuk Oh · Matteo Hessel · Zhongwen Xu · Manuel Kroiss · Hado van Hasselt · David Silver · Satinder Singh
- 2019 Poster: Dynamic Weights in Multi-Objective Deep Reinforcement Learning »
  Axel Abels · Diederik Roijers · Tom Lenaerts · Ann Nowé · Denis Steckelmacher
- 2019 Oral: Dynamic Weights in Multi-Objective Deep Reinforcement Learning »
  Axel Abels · Diederik Roijers · Tom Lenaerts · Ann Nowé · Denis Steckelmacher
- 2017 Poster: The Predictron: End-To-End Learning and Planning »
  David Silver · Hado van Hasselt · Matteo Hessel · Tom Schaul · Arthur Guez · Tim Harley · Gabriel Dulac-Arnold · David Reichert · Neil Rabinowitz · Andre Barreto · Thomas Degris
- 2017 Talk: The Predictron: End-To-End Learning and Planning »
  David Silver · Hado van Hasselt · Matteo Hessel · Tom Schaul · Arthur Guez · Tim Harley · Gabriel Dulac-Arnold · David Reichert · Neil Rabinowitz · Andre Barreto · Thomas Degris