

Poster

Going beyond compositional generalization, DDPM can produce zero-shot interpolation

Justin Deschenaux · Igor Krawczuk · Grigorios Chrysos · Volkan Cevher


Abstract:

Denoising Diffusion Probabilistic Models (DDPMs) exhibit remarkable capabilities in image generation, and prior studies suggest that they can generalize by composing latent factors learned from the training data. In this work, we go further and study DDPMs trained on strictly separate subsets of the data distribution with large gaps in the support of the latent factors. We show that such a model can effectively generate images in the unexplored, intermediate regions of the distribution. For instance, when trained on clearly smiling and clearly non-smiling faces, we demonstrate a sampling procedure that can generate slightly smiling faces without reference images (zero-shot interpolation). We replicate these findings for other attributes of the CelebA dataset as well as for synthetic images.
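For context, the sketch below shows standard DDPM ancestral sampling (Ho et al., 2020), the generative mechanism the abstract builds on; it is not the paper's specific interpolation procedure, which the abstract does not detail. The noise-prediction network `model` and the linear beta schedule are illustrative assumptions.

```python
# Minimal sketch of standard DDPM ancestral sampling, assuming a trained
# noise-prediction network eps_theta ("model") and a linear beta schedule.
import torch

def ddpm_sample(model, shape, T=1000, device="cpu"):
    betas = torch.linspace(1e-4, 0.02, T, device=device)   # noise schedule (assumed)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(shape, device=device)                   # x_T ~ N(0, I)
    for t in reversed(range(T)):
        t_batch = torch.full((shape[0],), t, device=device, dtype=torch.long)
        eps = model(x, t_batch)                              # predicted noise eps_theta(x_t, t)
        # Posterior mean: (x_t - beta_t / sqrt(1 - alpha_bar_t) * eps) / sqrt(alpha_t)
        mean = (x - betas[t] / torch.sqrt(1.0 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        if t > 0:
            x = mean + torch.sqrt(betas[t]) * torch.randn_like(x)  # add noise with sigma_t^2 = beta_t
        else:
            x = mean                                          # final step is deterministic
    return x
```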
