

Poster

Kernel Mean Matching for Content Addressability of GANs

Wittawat Jitkrittum · Patsorn Sangkloy · Muhammad Waleed Gondal · Amit Raj · James Hays · Bernhard Schölkopf

Pacific Ballroom #136

Keywords: [ Computer Vision ] [ Deep Generative Models ] [ Generative Adversarial Networks ] [ Kernel Methods ]


Abstract:

We propose a novel procedure that adds "content-addressability" to any given unconditional implicit generative model, e.g., a generative adversarial network (GAN). The procedure allows users to control the generative process by specifying a set of desired examples (of arbitrary size) based on which similar samples are generated from the model. The proposed approach, based on kernel mean matching, is applicable to any generative model that transforms latent vectors into samples, and does not require retraining the model. Experiments on several high-dimensional image generation problems (CelebA-HQ; LSUN bedroom, bridge, and tower) show that our approach generates images that are consistent with the input set while retaining the image quality of the original model. To our knowledge, this is the first work that attempts to construct, at test time, a content-addressable generative model from a trained marginal model.
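To illustrate the core idea of kernel mean matching at test time, the following is a minimal, hypothetical sketch (not the authors' implementation): a toy linear map stands in for a trained generator, and latent vectors are optimized to minimize the squared maximum mean discrepancy (MMD) between generated samples and a user-specified target set, with a Gaussian kernel and finite-difference gradients. The generator's weights are never updated, mirroring the no-retraining property described in the abstract.

```python
import numpy as np

def gaussian_kernel(X, Y, bandwidth):
    """Pairwise Gaussian (RBF) kernel matrix between rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def mmd2(X, Y, bandwidth=3.0):
    """Biased estimate of squared MMD: squared RKHS distance between
    the kernel mean embeddings of the sample sets X and Y."""
    return (gaussian_kernel(X, X, bandwidth).mean()
            - 2.0 * gaussian_kernel(X, Y, bandwidth).mean()
            + gaussian_kernel(Y, Y, bandwidth).mean())

rng = np.random.default_rng(0)

# Toy stand-in for a trained generator: a fixed linear map from a
# 2-d latent space to a 5-d sample space (hypothetical; a real GAN
# generator is a deep network).
W = rng.normal(size=(2, 5))
generator = lambda Z: Z @ W

# User-specified "content" set: the examples the output should resemble.
targets = generator(rng.normal(size=(3, 2)))

# Optimize latent vectors by finite-difference gradient descent on
# MMD^2; the generator's weights are never touched (no retraining).
Z = rng.normal(size=(4, 2))
initial = mmd2(generator(Z), targets)
eps, lr = 1e-4, 0.2
for _ in range(300):
    base = mmd2(generator(Z), targets)
    grad = np.zeros_like(Z)
    for i in range(Z.shape[0]):
        for j in range(Z.shape[1]):
            Zp = Z.copy()
            Zp[i, j] += eps
            grad[i, j] = (mmd2(generator(Zp), targets) - base) / eps
    Z -= lr * grad

final = mmd2(generator(Z), targets)
```

In practice one would backpropagate through the generator rather than use finite differences; the kernel bandwidth and step size here are illustrative choices, not values from the paper.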
