Extra-gradient with player sampling for faster convergence in n-player games
Samy Jelassi · Carles Domingo-Enrich · Damien Scieur · Arthur Mensch · Joan Bruna

Thu Jul 16 12:00 PM -- 12:45 PM & Fri Jul 17 01:00 AM -- 01:45 AM (PDT)

Data-driven modeling increasingly requires finding a Nash equilibrium in multi-player games, e.g. when training GANs. In this paper, we analyse a new extra-gradient method for Nash equilibrium finding that performs gradient extrapolations and updates on a random subset of players at each iteration. This approach provably exhibits a better rate of convergence than full extra-gradient for non-smooth convex games with a noisy gradient oracle. We propose an additional variance reduction mechanism to obtain speed-ups in smooth convex games. Our approach makes extrapolation amenable to massive multiplayer settings and brings empirical speed-ups, in particular when using a heuristic cyclic sampling scheme. Most importantly, it enables faster and better training of GANs and mixtures of GANs.
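To make the idea concrete, the following is a minimal sketch (not the authors' exact algorithm or its variance-reduced variant) of extra-gradient with player sampling on a toy quadratic n-player game. The game matrix `A`, the step size `lr`, and the subset size `k` are illustrative choices; the paper's analysis covers more general convex games and noisy gradient oracles.

```python
import numpy as np


def simultaneous_grads(theta, A):
    """Simultaneous-gradient field of a hypothetical quadratic game where
    player i minimises 0.5 * theta_i**2 + theta_i * (A @ theta)_i.
    With antisymmetric A, the unique Nash equilibrium is theta = 0."""
    return theta + A @ theta


def player_sampled_extragradient(A, n_iters=5000, lr=0.1, k=1, seed=0):
    """Extra-gradient where, at each iteration, only a random subset of k
    players is extrapolated and only a (fresh) random subset is updated."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    theta = rng.normal(size=n)
    for _ in range(n_iters):
        # Extrapolation step: move only the sampled players.
        sampled = rng.choice(n, size=k, replace=False)
        g = simultaneous_grads(theta, A)
        theta_ex = theta.copy()
        theta_ex[sampled] -= lr * g[sampled]
        # Update step: gradients taken at the extrapolated point,
        # again applied to a sampled subset of players.
        sampled = rng.choice(n, size=k, replace=False)
        g_ex = simultaneous_grads(theta_ex, A)
        theta[sampled] -= lr * g_ex[sampled]
    return theta


# Antisymmetric coupling -> strongly monotone game, equilibrium at 0.
A = np.array([[0.0, 0.2, -0.1],
              [-0.2, 0.0, 0.15],
              [0.1, -0.15, 0.0]])
theta_final = player_sampled_extragradient(A)
print(np.linalg.norm(theta_final))  # distance to the Nash equilibrium
```

Each iteration touches only `k` of the `n` players, so the per-iteration cost scales with the subset size rather than the full player count; the cyclic sampling heuristic mentioned in the abstract would replace the uniform `rng.choice` with a deterministic rotation over players.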

Author Information

Samy Jelassi (Princeton University)
Carles Domingo-Enrich (NYU)
Damien Scieur (Samsung - SAIT AI Lab, Montreal)
Arthur Mensch (ENS)
Joan Bruna (New York University)