The cornerstone underpinning deep learning is the guarantee that gradient descent on an objective converges to local minima. Unfortunately, this guarantee fails in settings with multiple interacting losses, such as generative adversarial nets. The behavior of gradient-based methods in games is not well understood, and is becoming increasingly important as adversarial and multi-objective architectures proliferate. In this paper, we develop new techniques to understand and control the dynamics in general games. The key result is a decomposition of the second-order dynamics into two components. The first is related to potential games, which reduce to gradient descent on an implicit function; the second relates to Hamiltonian games, a new class of games that obey a conservation law akin to the conservation laws of classical mechanics. The decomposition motivates Symplectic Gradient Adjustment (SGA), a new algorithm for finding stable fixed points in general games. Basic experiments show that SGA is competitive with recently proposed algorithms for finding local Nash equilibria in GANs, while also being applicable to, and having guarantees in, much more general games.
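Concretely, the paper works with the simultaneous gradient xi (the concatenation of each player's gradient of its own loss) and its Jacobian J: the symmetric part S = (J + J^T)/2 corresponds to the potential component of the game, the antisymmetric part A = (J - J^T)/2 to the Hamiltonian component, and SGA descends along xi + lambda * A^T xi. The following is a minimal NumPy sketch of that update, not the authors' reference implementation; the fixed lambda, step size, and the toy bilinear game are illustrative assumptions (the paper additionally chooses the sign of lambda via an alignment criterion, omitted here).

```python
import numpy as np

def sga_step(xi_fn, jac_fn, w, lam=1.0, lr=0.01):
    """One simplified Symplectic Gradient Adjustment step.

    xi_fn(w):  simultaneous gradient, i.e. the concatenation of
               each player's gradient of its own loss at w.
    jac_fn(w): Jacobian of xi at w.
    """
    xi = xi_fn(w)
    J = jac_fn(w)
    A = 0.5 * (J - J.T)                  # Hamiltonian (rotational) part of J
    return w - lr * (xi + lam * A.T @ xi)  # adjusted gradient step

# Toy bilinear zero-sum game: f(x, y) = x*y, g = -f.
# Plain simultaneous gradient descent orbits the origin forever;
# the symplectic adjustment spirals into the fixed point (0, 0).
xi_fn = lambda w: np.array([w[1], -w[0]])               # (df/dx, dg/dy)
jac_fn = lambda w: np.array([[0.0, 1.0], [-1.0, 0.0]])
w = np.array([1.0, 1.0])
for _ in range(2000):
    w = sga_step(xi_fn, jac_fn, w)
print(w)  # close to [0, 0]
```

In this toy game J is purely antisymmetric (S = 0), so the dynamics are entirely Hamiltonian and the adjustment term A^T xi supplies exactly the inward pull that plain gradient play lacks.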
Author Information
David Balduzzi (DeepMind)
Sebastien Racaniere (DeepMind)
James Martens (DeepMind)
Jakob Foerster (Facebook AI Research)
Karl Tuyls (DeepMind)
Thore Graepel (DeepMind)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Oral: The Mechanics of n-Player Differentiable Games
  Thu Jul 12th 02:00 -- 02:20 PM, Room A7
More from the Same Authors
- 2020 Poster: Fast computation of Nash Equilibria in Imperfect Information Games
  Remi Munos · Julien Perolat · Jean-Baptiste Lespiau · Mark Rowland · Bart De Vylder · Marc Lanctot · Finbarr Timbers · Daniel Hennes · Shayegan Omidshafiei · Audrunas Gruslys · Mohammad Gheshlaghi Azar · Edward Lockhart · Karl Tuyls
- 2020 Poster: Normalizing Flows on Tori and Spheres
  Danilo J. Rezende · George Papamakarios · Sebastien Racaniere · Michael Albergo · Gurtej Kanwar · Phiala Shanahan · Kyle Cranmer
- 2019 Poster: An Investigation of Model-Free Planning
  Arthur Guez · Mehdi Mirza · Karol Gregor · Rishabh Kabra · Sebastien Racaniere · Theophane Weber · David Raposo · Adam Santoro · Laurent Orseau · Tom Eccles · Greg Wayne · David Silver · Timothy Lillicrap
- 2019 Poster: Open-ended learning in symmetric zero-sum games
  David Balduzzi · Marta Garnelo · Yoram Bachrach · Wojciech Czarnecki · Julien Perolat · Max Jaderberg · Thore Graepel
- 2019 Oral: An Investigation of Model-Free Planning
  Arthur Guez · Mehdi Mirza · Karol Gregor · Rishabh Kabra · Sebastien Racaniere · Theophane Weber · David Raposo · Adam Santoro · Laurent Orseau · Tom Eccles · Greg Wayne · David Silver · Timothy Lillicrap
- 2019 Oral: Open-ended learning in symmetric zero-sum games
  David Balduzzi · Marta Garnelo · Yoram Bachrach · Wojciech Czarnecki · Julien Perolat · Max Jaderberg · Thore Graepel
- 2018 Poster: QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning
  Tabish Rashid · Mikayel Samvelyan · Christian Schroeder · Gregory Farquhar · Jakob Foerster · Shimon Whiteson
- 2018 Oral: QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning
  Tabish Rashid · Mikayel Samvelyan · Christian Schroeder · Gregory Farquhar · Jakob Foerster · Shimon Whiteson
- 2018 Poster: DiCE: The Infinitely Differentiable Monte Carlo Estimator
  Jakob Foerster · Gregory Farquhar · Maruan Al-Shedivat · Tim Rocktäschel · Eric Xing · Shimon Whiteson
- 2018 Oral: DiCE: The Infinitely Differentiable Monte Carlo Estimator
  Jakob Foerster · Gregory Farquhar · Maruan Al-Shedivat · Tim Rocktäschel · Eric Xing · Shimon Whiteson
- 2017 Poster: Stabilising Experience Replay for Deep Multi-Agent Reinforcement Learning
  Jakob Foerster · Nantas Nardelli · Gregory Farquhar · Triantafyllos Afouras · Phil Torr · Pushmeet Kohli · Shimon Whiteson
- 2017 Talk: Stabilising Experience Replay for Deep Multi-Agent Reinforcement Learning
  Jakob Foerster · Nantas Nardelli · Gregory Farquhar · Triantafyllos Afouras · Phil Torr · Pushmeet Kohli · Shimon Whiteson
- 2017 Poster: Strongly-Typed Agents are Guaranteed to Interact Safely
  David Balduzzi
- 2017 Poster: Neural Taylor Approximations: Convergence and Exploration in Rectifier Networks
  David Balduzzi · Brian McWilliams · Tony Butler-Yeoman
- 2017 Poster: The Shattered Gradients Problem: If resnets are the answer, then what is the question?
  David Balduzzi · Marcus Frean · Wan-Duo Ma · Brian McWilliams · Lennox Leary · John Lewis
- 2017 Poster: Input Switched Affine Networks: An RNN Architecture Designed for Interpretability
  Jakob Foerster · Justin Gilmer · Jan Chorowski · Jascha Sohl-Dickstein · David Sussillo
- 2017 Talk: Strongly-Typed Agents are Guaranteed to Interact Safely
  David Balduzzi
- 2017 Talk: Input Switched Affine Networks: An RNN Architecture Designed for Interpretability
  Jakob Foerster · Justin Gilmer · Jan Chorowski · Jascha Sohl-Dickstein · David Sussillo
- 2017 Talk: The Shattered Gradients Problem: If resnets are the answer, then what is the question?
  David Balduzzi · Marcus Frean · Wan-Duo Ma · Brian McWilliams · Lennox Leary · John Lewis
- 2017 Talk: Neural Taylor Approximations: Convergence and Exploration in Rectifier Networks
  David Balduzzi · Brian McWilliams · Tony Butler-Yeoman