We propose a unified mechanism for achieving coordination and communication in Multi-Agent Reinforcement Learning (MARL) by rewarding agents for having causal influence over other agents' actions. Causal influence is assessed using counterfactual reasoning: at each timestep, an agent simulates alternate actions it could have taken and computes their effect on the behavior of other agents. Actions that lead to larger changes in other agents' behavior are considered influential and are rewarded. We show that this reward is equivalent to maximizing the mutual information between agents' actions. Empirical results demonstrate that the influence reward leads to enhanced coordination and communication in challenging social dilemma environments, dramatically improving the learning curves of the deep RL agents and producing more meaningful learned communication protocols. The influence rewards for all agents can be computed in a decentralized way, by equipping each agent with a deep neural network model of the other agents. In contrast, key previous works on emergent communication in the MARL setting could not learn diverse policies in a decentralized manner and had to resort to centralized training. The influence reward therefore opens a window of new opportunities for research in this area.
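The counterfactual computation in the abstract can be sketched concretely. The snippet below is a minimal illustration, not the paper's implementation: it assumes agent k already has (or has learned) an approximate model of agent j's conditional policy, and all function and variable names are hypothetical. The influence of k's chosen action is measured as the KL divergence between j's behavior given that action and j's marginal behavior, obtained by averaging over k's counterfactual actions.

```python
import numpy as np

def influence_reward(chosen_action, cond_policies, counterfactual_prior):
    """Counterfactual influence reward for agent k's chosen action.

    cond_policies[i] approximates p(a_j | a_k = i, s): agent j's action
    distribution for each counterfactual action i that agent k could take.
    counterfactual_prior approximates p(a_k | s), agent k's own policy,
    used to marginalize out agent k's action.
    Returns KL( p(a_j | a_k, s) || p(a_j | s) ): how much the action
    actually taken shifts agent j's behavior away from its marginal.
    """
    marginal = counterfactual_prior @ cond_policies   # p(a_j | s)
    conditional = cond_policies[chosen_action]        # p(a_j | a_k, s)
    return float(np.sum(conditional * np.log(conditional / marginal)))

# If every counterfactual leaves agent j's policy unchanged, the reward is
# zero (no influence); the more the chosen action shifts j's distribution
# relative to the marginal, the larger the reward.
```

In this reading, averaging the reward over other agents j and adding it (suitably scaled) to the environment reward recovers the intrinsic-motivation scheme the abstract describes; the mutual-information connection follows because the expectation of this KL term over agent k's policy is exactly I(a_k; a_j | s).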
Author Information
Natasha Jaques (MIT)
Angeliki Lazaridou (DeepMind)
Edward Hughes (DeepMind)
Caglar Gulcehre (DeepMind)
Pedro Ortega (DeepMind)
DJ Strouse (Princeton University)
Joel Z Leibo (DeepMind)
Nando de Freitas (DeepMind)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Poster: Social Influence as Intrinsic Motivation for Multi-Agent Deep Reinforcement Learning »
  Thu. Jun 13th, 01:30 -- 04:00 AM, Pacific Ballroom #31
More from the Same Authors
- 2023: On the Universality of Linear Recurrences Followed by Nonlinear Projections »
  Antonio Orvieto · Soham De · Razvan Pascanu · Caglar Gulcehre · Samuel Smith
- 2023 Oral: Human-Timescale Adaptation in an Open-Ended Task Space »
  Jakob Bauer · Kate Baumli · Feryal Behbahani · Avishkar Bhoopchand · Natalie Bradley-Schmieg · Michael Chang · Natalie Clay · Adrian Collister · Vibhavari Dasagi · Lucy Gonzalez · Karol Gregor · Edward Hughes · Sheleem Kashem · Maria Loks-Thompson · Hannah Openshaw · Jack Parker-Holder · Shreya Pathak · Nicolas Perez-Nieves · Nemanja Rakicevic · Tim Rocktäschel · Yannick Schroecker · Satinder Singh · Jakub Sygnowski · Karl Tuyls · Sarah York · Alexander Zacherl · Lei Zhang
- 2023 Poster: Human-Timescale Adaptation in an Open-Ended Task Space »
  Jakob Bauer · Kate Baumli · Feryal Behbahani · Avishkar Bhoopchand · Natalie Bradley-Schmieg · Michael Chang · Natalie Clay · Adrian Collister · Vibhavari Dasagi · Lucy Gonzalez · Karol Gregor · Edward Hughes · Sheleem Kashem · Maria Loks-Thompson · Hannah Openshaw · Jack Parker-Holder · Shreya Pathak · Nicolas Perez-Nieves · Nemanja Rakicevic · Tim Rocktäschel · Yannick Schroecker · Satinder Singh · Jakub Sygnowski · Karl Tuyls · Sarah York · Alexander Zacherl · Lei Zhang
- 2022 Poster: StreamingQA: A Benchmark for Adaptation to New Knowledge over Time in Question Answering Models »
  Adam Liska · Tomas Kocisky · Elena Gribovskaya · Tayfun Terzi · Eren Sezener · Devang Agrawal · Cyprien de Masson d'Autume · Tim Scholtes · Manzil Zaheer · Susannah Young · Ellen Gilsenan-McMahon · Sophia Austin · Phil Blunsom · Angeliki Lazaridou
- 2022 Spotlight: StreamingQA: A Benchmark for Adaptation to New Knowledge over Time in Question Answering Models »
  Adam Liska · Tomas Kocisky · Elena Gribovskaya · Tayfun Terzi · Eren Sezener · Devang Agrawal · Cyprien de Masson d'Autume · Tim Scholtes · Manzil Zaheer · Susannah Young · Ellen Gilsenan-McMahon · Sophia Austin · Phil Blunsom · Angeliki Lazaridou
- 2021 Poster: From Poincaré Recurrence to Convergence in Imperfect Information Games: Finding Equilibrium via Regularization »
  Julien Perolat · Remi Munos · Jean-Baptiste Lespiau · Shayegan Omidshafiei · Mark Rowland · Pedro Ortega · Neil Burch · Thomas Anthony · David Balduzzi · Bart De Vylder · Georgios Piliouras · Marc Lanctot · Karl Tuyls
- 2021 Spotlight: From Poincaré Recurrence to Convergence in Imperfect Information Games: Finding Equilibrium via Regularization »
  Julien Perolat · Remi Munos · Jean-Baptiste Lespiau · Shayegan Omidshafiei · Mark Rowland · Pedro Ortega · Neil Burch · Thomas Anthony · David Balduzzi · Bart De Vylder · Georgios Piliouras · Marc Lanctot · Karl Tuyls
- 2021 Poster: Scalable Evaluation of Multi-Agent Reinforcement Learning with Melting Pot »
  Joel Z Leibo · Edgar Duenez-Guzman · Alexander Vezhnevets · John Agapiou · Peter Sunehag · Raphael Koster · Jayd Matyas · Charles Beattie · Igor Mordatch · Thore Graepel
- 2021 Oral: Scalable Evaluation of Multi-Agent Reinforcement Learning with Melting Pot »
  Joel Z Leibo · Edgar Duenez-Guzman · Alexander Vezhnevets · John Agapiou · Peter Sunehag · Raphael Koster · Jayd Matyas · Charles Beattie · Igor Mordatch · Thore Graepel
- 2020: Invited Talk: Angeliki Lazaridou »
  Angeliki Lazaridou
- 2020 Poster: Stabilizing Transformers for Reinforcement Learning »
  Emilio Parisotto · Francis Song · Jack Rae · Razvan Pascanu · Caglar Gulcehre · Siddhant Jayakumar · Max Jaderberg · Raphael Lopez Kaufman · Aidan Clark · Seb Noury · Matthew Botvinick · Nicolas Heess · Raia Hadsell
- 2020 Poster: OPtions as REsponses: Grounding behavioural hierarchies in multi-agent reinforcement learning »
  Alexander Vezhnevets · Yuhuai Wu · Maria Eckstein · Rémi Leblond · Joel Z Leibo
- 2020 Poster: Improving the Gating Mechanism of Recurrent Neural Networks »
  Albert Gu · Caglar Gulcehre · Thomas Paine · Matthew Hoffman · Razvan Pascanu
- 2019: Multi-agent communication from raw perceptual input: what works, what doesn't and what's next »
  Angeliki Lazaridou
- 2019 Poster: Bayesian Action Decoder for Deep Multi-Agent Reinforcement Learning »
  Jakob Foerster · Francis Song · Edward Hughes · Neil Burch · Iain Dunning · Shimon Whiteson · Matthew Botvinick · Michael Bowling
- 2019 Oral: Bayesian Action Decoder for Deep Multi-Agent Reinforcement Learning »
  Jakob Foerster · Francis Song · Edward Hughes · Neil Burch · Iain Dunning · Shimon Whiteson · Matthew Botvinick · Michael Bowling
- 2017 Poster: Learning to Learn without Gradient Descent by Gradient Descent »
  Yutian Chen · Matthew Hoffman · Sergio Gómez Colmenarejo · Misha Denil · Timothy Lillicrap · Matthew Botvinick · Nando de Freitas
- 2017 Poster: Learned Optimizers that Scale and Generalize »
  Olga Wichrowska · Niru Maheswaranathan · Matthew Hoffman · Sergio Gómez Colmenarejo · Misha Denil · Nando de Freitas · Jascha Sohl-Dickstein
- 2017 Poster: Sequence Tutor: Conservative fine-tuning of sequence generation models with KL-control »
  Natasha Jaques · Shixiang Gu · Dzmitry Bahdanau · Jose Miguel Hernandez-Lobato · Richard E Turner · Douglas Eck
- 2017 Talk: Sequence Tutor: Conservative fine-tuning of sequence generation models with KL-control »
  Natasha Jaques · Shixiang Gu · Dzmitry Bahdanau · Jose Miguel Hernandez-Lobato · Richard E Turner · Douglas Eck
- 2017 Poster: Parallel Multiscale Autoregressive Density Estimation »
  Scott Reed · Aäron van den Oord · Nal Kalchbrenner · Sergio Gómez Colmenarejo · Ziyu Wang · Yutian Chen · Dan Belov · Nando de Freitas
- 2017 Talk: Learned Optimizers that Scale and Generalize »
  Olga Wichrowska · Niru Maheswaranathan · Matthew Hoffman · Sergio Gómez Colmenarejo · Misha Denil · Nando de Freitas · Jascha Sohl-Dickstein
- 2017 Talk: Learning to Learn without Gradient Descent by Gradient Descent »
  Yutian Chen · Matthew Hoffman · Sergio Gómez Colmenarejo · Misha Denil · Timothy Lillicrap · Matthew Botvinick · Nando de Freitas
- 2017 Talk: Parallel Multiscale Autoregressive Density Estimation »
  Scott Reed · Aäron van den Oord · Nal Kalchbrenner · Sergio Gómez Colmenarejo · Ziyu Wang · Yutian Chen · Dan Belov · Nando de Freitas