

Multi-objective training of Generative Adversarial Networks with multiple discriminators

Isabela Albuquerque · Joao Monteiro · Thang Doan · Breandan Considine · Tiago Falk · Ioannis Mitliagkas

Pacific Ballroom #4

Keywords: [ Optimization ] [ Generative Adversarial Networks ] [ Deep Generative Models ]


Recent literature has demonstrated promising results for training Generative Adversarial Networks by employing a set of discriminators, in contrast to the traditional game involving one generator against a single adversary. Such methods perform single-objective optimization on some simple consolidation of the losses, e.g. an arithmetic average. In this work, we revisit the multiple-discriminator setting by framing the simultaneous minimization of losses provided by different models as a multi-objective optimization problem. Specifically, we evaluate the performance of multiple gradient descent and the hypervolume maximization algorithm on a number of different datasets. Moreover, we argue that the previously proposed methods and hypervolume maximization can all be seen as variations of multiple gradient descent in which the update direction can be computed efficiently. Our results indicate that hypervolume maximization presents a better compromise between sample quality and computational cost than previous methods.
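The contrast the abstract draws can be sketched numerically. In the simple-consolidation baseline, every discriminator's loss gets the same weight; under hypervolume maximization the generator instead minimizes a term of the form -Σ_k log(η - l_k), whose gradient weights each loss by 1/(η - l_k), so discriminators currently beating the generator hardest dominate the update. The snippet below is a minimal illustration of that weighting, not the authors' implementation; the slack used to pick the nadir point η is a hypothetical choice for the example.

```python
import numpy as np

def average_weights(losses):
    # Single-objective baseline: equal weight on every discriminator's loss.
    losses = np.asarray(losses, dtype=float)
    return np.full(len(losses), 1.0 / len(losses))

def hypervolume_weights(losses, slack=0.1):
    # Hypervolume-maximization sketch: minimizing -sum_k log(eta - l_k)
    # yields per-loss weights proportional to 1 / (eta - l_k), where the
    # nadir point eta upper-bounds all losses. Larger losses (discriminators
    # the generator is currently losing to) receive larger weights.
    losses = np.asarray(losses, dtype=float)
    eta = losses.max() + slack  # hypothetical nadir-point choice
    w = 1.0 / (eta - losses)
    return w / w.sum()  # normalized so both schemes are comparable
```

For two discriminator losses such as `[0.2, 0.9]`, `average_weights` returns `[0.5, 0.5]`, while `hypervolume_weights` concentrates most of the weight on the 0.9 loss, adapting the update direction to the hardest objective.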
