Poster
Unsupervised Discovery of Interpretable Directions in the GAN Latent Space
Andrey Voynov · Artem Babenko

Thu Jul 16 09:00 AM -- 09:45 AM & Fri Jul 17 12:00 AM -- 12:45 AM (PDT)

The latent spaces of GAN models often have semantically meaningful directions. Moving along these directions corresponds to human-interpretable image transformations, such as zooming or recoloring, enabling a more controllable generation process. However, the discovery of such directions is currently performed in a supervised manner, requiring human labels, pretrained models, or some form of self-supervision. These requirements severely restrict the range of directions that existing approaches can discover. In this paper, we introduce an unsupervised method to identify interpretable directions in the latent space of a pretrained GAN model. By a simple model-agnostic procedure, we find directions corresponding to sensible semantic manipulations without any form of (self-)supervision. Furthermore, we reveal several non-trivial findings that would be difficult to obtain with existing methods, e.g., a direction corresponding to background removal. As an immediate practical benefit of our work, we show how to exploit this finding to achieve competitive performance for weakly-supervised saliency detection. The implementation of our method is available online.
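To make the notion of "moving along a latent direction" concrete, here is a minimal, purely illustrative sketch of latent traversal in a pretrained GAN. It is not the authors' discovery procedure; it only shows how a candidate direction is visualized once found. The generator `G`, latent code `z`, and `direction` are hypothetical placeholders.

```python
# Illustrative sketch, not the paper's method: decode a latent code shifted
# along a candidate direction by several magnitudes to visualize the
# transformation (e.g. zoom, recoloring) that the direction encodes.
import torch

def traverse_direction(G, z, direction, shifts=(-6.0, -3.0, 0.0, 3.0, 6.0)):
    """G: pretrained generator mapping latent codes to images (assumed).
    z: latent code tensor of shape [batch, latent_dim].
    direction: tensor of shape [latent_dim]; normalized to unit length here.
    Returns a list of image batches, one per shift magnitude."""
    direction = direction / direction.norm()
    images = []
    with torch.no_grad():
        for alpha in shifts:
            images.append(G(z + alpha * direction))
    return images
```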

Author Information

Andrey Voynov (Yandex)
Artem Babenko (Yandex)
