We study the problem of zero-shot coordination (ZSC), where agents must independently produce strategies for a collaborative game that are compatible with novel partners not seen during training. Our first contribution is to consider the need for diversity in generating such agents. Because self-play (SP) agents control their own trajectory distribution during training, each policy typically only performs well on this exact distribution. As a result, they achieve low scores in ZSC, since playing with another agent is likely to put them in situations they have not encountered during training. To address this issue, we train a common best response (BR) to a population of agents, which we regulate to be diverse. To this end, we introduce Trajectory Diversity (TrajeDi) -- a differentiable objective for generating diverse reinforcement learning policies. We derive TrajeDi as a generalization of the Jensen-Shannon divergence between policies and motivate it experimentally in two simple settings. We then focus on the collaborative card game Hanabi, demonstrating the scalability of our method and improving upon the cross-play scores of both independently trained SP agents and BRs to unregularized populations.
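The abstract derives TrajeDi as a generalization of the Jensen-Shannon divergence (JSD) between policies. As a rough illustration only -- not the paper's actual objective, which operates on whole trajectory distributions and is made differentiable with respect to policy parameters -- the following sketch computes the plain JSD among the action distributions of a population of policies at a single state. A JSD of zero means the policies act identically there; larger values indicate more behavioral diversity.

```python
import math

def jensen_shannon(policies):
    """JSD (uniform weights, in nats) among action distributions.

    `policies` is a list of probability vectors over the same action set.
    JSD = H(mean distribution) - mean of individual entropies H(p_i).
    """
    n = len(policies)
    num_actions = len(policies[0])
    mean = [sum(p[a] for p in policies) / n for a in range(num_actions)]

    def entropy(p):
        # Shannon entropy in nats; 0 * log 0 is treated as 0.
        return -sum(x * math.log(x) for x in p if x > 0)

    return entropy(mean) - sum(entropy(p) for p in policies) / n
```

For example, two identical policies give a JSD of 0, while two deterministic policies that always pick different actions give the maximum value log(2). Maximizing such a term across a population (while each member also optimizes task return) is the general idea behind diversity-regularized population training.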
Author Information
Andrei Lupu (Mila, McGill University)
Brandon Cui (Facebook AI Research)
Hengyuan Hu (Facebook AI Research)
Jakob Foerster (Facebook AI Research)
Jakob Foerster started as an Associate Professor at the Department of Engineering Science at the University of Oxford in the fall of 2021. During his PhD at Oxford, he helped bring deep multi-agent reinforcement learning to the forefront of AI research and interned at Google Brain, OpenAI, and DeepMind. After his PhD, he worked as a research scientist at Facebook AI Research in California, where he continued doing foundational work. He was the lead organizer of the first Emergent Communication workshop at NeurIPS in 2017, which he has helped organize ever since, and he was awarded a prestigious CIFAR AI Chair in 2019. His past work addresses how AI agents can learn to cooperate and communicate with other agents; most recently, he has been developing and addressing the zero-shot coordination problem setting, a crucial step towards human-AI coordination.
Related Events (a corresponding poster, oral, or spotlight)
-
2021 Poster: Trajectory Diversity for Zero-Shot Coordination »
Wed. Jul 21st 04:00 -- 06:00 AM Room
More from the Same Authors
-
2021 : Off Belief Learning »
Hengyuan Hu -
2021 : K-level Reasoning for Zero-Shot Coordination in Hanabi »
Brandon Cui -
2021 : Self-Explaining Deviations for Zero-Shot Coordination »
Hengyuan Hu -
2021 : Randomized Least Squares Policy Optimization »
Haque Ishfaq · Zhuoran Yang · Andrei Lupu · Viet Nguyen · Lewis Liu · Riashat Islam · Zhaoran Wang · Doina Precup -
2021 : Learning Space Partitions for Path Planning »
Kevin Yang · Tianjun Zhang · Chris Cummins · Brandon Cui · Benoit Steiner · Linnan Wang · Joseph E Gonzalez · Dan Klein · Yuandong Tian -
2022 : Adversarial Cheap Talk »
Christopher Lu · Timon Willi · Alistair Letcher · Jakob Foerster -
2022 : Illusionary Attacks on Sequential Decision Makers and Countermeasures »
Tim Franzmeyer · Joao Henriques · Jakob Foerster · Phil Torr · Adel Bibi · Christian Schroeder -
2022 : Discovered Policy Optimisation »
Christopher Lu · Jakub Grudzien Kuba · Alistair Letcher · Luke Metz · Christian Schroeder · Jakob Foerster -
2023 Poster: Learning Intuitive Policies Using Action Features »
Mingwei Ma · Jizhou Liu · Samuel Sokota · Max Kleiman-Weiner · Jakob Foerster -
2023 Poster: Adversarial Cheap Talk »
Christopher Lu · Timon Willi · Alistair Letcher · Jakob Foerster -
2022 Poster: Evolving Curricula with Regret-Based Environment Design »
Jack Parker-Holder · Minqi Jiang · Michael Dennis · Mikayel Samvelyan · Jakob Foerster · Edward Grefenstette · Tim Rocktäschel -
2022 Poster: COLA: Consistent Learning with Opponent-Learning Awareness »
Timon Willi · Alistair Letcher · Johannes Treutlein · Jakob Foerster -
2022 Spotlight: Evolving Curricula with Regret-Based Environment Design »
Jack Parker-Holder · Minqi Jiang · Michael Dennis · Mikayel Samvelyan · Jakob Foerster · Edward Grefenstette · Tim Rocktäschel -
2022 Spotlight: COLA: Consistent Learning with Opponent-Learning Awareness »
Timon Willi · Alistair Letcher · Johannes Treutlein · Jakob Foerster -
2022 Poster: Modeling Strong and Human-Like Gameplay with KL-Regularized Search »
Athul Paul Jacob · David Wu · Gabriele Farina · Adam Lerer · Hengyuan Hu · Anton Bakhtin · Jacob Andreas · Noam Brown -
2022 Poster: Communicating via Markov Decision Processes »
Samuel Sokota · Christian Schroeder · Maximilian Igl · Luisa Zintgraf · Phil Torr · Martin Strohmeier · Zico Kolter · Shimon Whiteson · Jakob Foerster -
2022 Spotlight: Communicating via Markov Decision Processes »
Samuel Sokota · Christian Schroeder · Maximilian Igl · Luisa Zintgraf · Phil Torr · Martin Strohmeier · Zico Kolter · Shimon Whiteson · Jakob Foerster -
2022 Spotlight: Modeling Strong and Human-Like Gameplay with KL-Regularized Search »
Athul Paul Jacob · David Wu · Gabriele Farina · Adam Lerer · Hengyuan Hu · Anton Bakhtin · Jacob Andreas · Noam Brown -
2022 Poster: Model-Free Opponent Shaping »
Christopher Lu · Timon Willi · Christian Schroeder de Witt · Jakob Foerster -
2022 Poster: Mirror Learning: A Unifying Framework of Policy Optimisation »
Jakub Grudzien Kuba · Christian Schroeder de Witt · Jakob Foerster -
2022 Poster: Generalized Beliefs for Cooperative AI »
Darius Muglich · Luisa Zintgraf · Christian Schroeder de Witt · Shimon Whiteson · Jakob Foerster -
2022 Spotlight: Generalized Beliefs for Cooperative AI »
Darius Muglich · Luisa Zintgraf · Christian Schroeder de Witt · Shimon Whiteson · Jakob Foerster -
2022 Spotlight: Model-Free Opponent Shaping »
Christopher Lu · Timon Willi · Christian Schroeder de Witt · Jakob Foerster -
2022 Spotlight: Mirror Learning: A Unifying Framework of Policy Optimisation »
Jakub Grudzien Kuba · Christian Schroeder de Witt · Jakob Foerster -
2021 Poster: Off-Belief Learning »
Hengyuan Hu · Adam Lerer · Brandon Cui · Luis Pineda · Noam Brown · Jakob Foerster -
2021 Spotlight: Off-Belief Learning »
Hengyuan Hu · Adam Lerer · Brandon Cui · Luis Pineda · Noam Brown · Jakob Foerster -
2021 Poster: A New Formalism, Method and Open Issues for Zero-Shot Coordination »
Johannes Treutlein · Michael Dennis · Caspar Oesterheld · Jakob Foerster -
2021 Spotlight: A New Formalism, Method and Open Issues for Zero-Shot Coordination »
Johannes Treutlein · Michael Dennis · Caspar Oesterheld · Jakob Foerster -
2020 Poster: “Other-Play” for Zero-Shot Coordination »
Hengyuan Hu · Alexander Peysakhovich · Adam Lerer · Jakob Foerster -
2019 Poster: Bayesian Action Decoder for Deep Multi-Agent Reinforcement Learning »
Jakob Foerster · Francis Song · Edward Hughes · Neil Burch · Iain Dunning · Shimon Whiteson · Matthew Botvinick · Michael Bowling -
2019 Oral: Bayesian Action Decoder for Deep Multi-Agent Reinforcement Learning »
Jakob Foerster · Francis Song · Edward Hughes · Neil Burch · Iain Dunning · Shimon Whiteson · Matthew Botvinick · Michael Bowling -
2019 Poster: A Baseline for Any Order Gradient Estimation in Stochastic Computation Graphs »
Jingkai Mao · Jakob Foerster · Tim Rocktäschel · Maruan Al-Shedivat · Gregory Farquhar · Shimon Whiteson -
2019 Oral: A Baseline for Any Order Gradient Estimation in Stochastic Computation Graphs »
Jingkai Mao · Jakob Foerster · Tim Rocktäschel · Maruan Al-Shedivat · Gregory Farquhar · Shimon Whiteson -
2018 Poster: The Mechanics of n-Player Differentiable Games »
David Balduzzi · Sebastien Racaniere · James Martens · Jakob Foerster · Karl Tuyls · Thore Graepel -
2018 Poster: QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning »
Tabish Rashid · Mikayel Samvelyan · Christian Schroeder · Gregory Farquhar · Jakob Foerster · Shimon Whiteson -
2018 Oral: The Mechanics of n-Player Differentiable Games »
David Balduzzi · Sebastien Racaniere · James Martens · Jakob Foerster · Karl Tuyls · Thore Graepel -
2018 Oral: QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning »
Tabish Rashid · Mikayel Samvelyan · Christian Schroeder · Gregory Farquhar · Jakob Foerster · Shimon Whiteson -
2018 Poster: DiCE: The Infinitely Differentiable Monte Carlo Estimator »
Jakob Foerster · Gregory Farquhar · Maruan Al-Shedivat · Tim Rocktäschel · Eric Xing · Shimon Whiteson -
2018 Oral: DiCE: The Infinitely Differentiable Monte Carlo Estimator »
Jakob Foerster · Gregory Farquhar · Maruan Al-Shedivat · Tim Rocktäschel · Eric Xing · Shimon Whiteson -
2017 Poster: Stabilising Experience Replay for Deep Multi-Agent Reinforcement Learning »
Jakob Foerster · Nantas Nardelli · Gregory Farquhar · Triantafyllos Afouras · Phil Torr · Pushmeet Kohli · Shimon Whiteson -
2017 Talk: Stabilising Experience Replay for Deep Multi-Agent Reinforcement Learning »
Jakob Foerster · Nantas Nardelli · Gregory Farquhar · Triantafyllos Afouras · Phil Torr · Pushmeet Kohli · Shimon Whiteson -
2017 Poster: Input Switched Affine Networks: An RNN Architecture Designed for Interpretability »
Jakob Foerster · Justin Gilmer · Jan Chorowski · Jascha Sohl-Dickstein · David Sussillo -
2017 Talk: Input Switched Affine Networks: An RNN Architecture Designed for Interpretability »
Jakob Foerster · Justin Gilmer · Jan Chorowski · Jascha Sohl-Dickstein · David Sussillo