

Poster

Compositional Image Decomposition with Diffusion Models

Jocelin Su · Nan Liu · Yanbo Wang · Josh Tenenbaum · Yilun Du

Hall C 4-9 #409
Wed 24 Jul 4:30 a.m. PDT — 6 a.m. PDT

Abstract:

Given an image of a natural scene, we are able to quickly decompose it into a set of components such as objects, lighting, shadows, and foreground. We can then envision a scene where we combine certain components with those from other images, for instance a set of objects from our bedroom and animals from a zoo under the lighting conditions of a forest, even if we have never encountered such a scene before. In this paper, we present a method to decompose an image into such compositional components. Our approach, Decomp Diffusion, is an unsupervised method which, given a single image, infers a set of different components in the image, each represented by a diffusion model. We demonstrate how components can capture different factors of the scene, ranging from global scene descriptors such as shadows or facial expression to local scene descriptors such as constituent objects. We further illustrate how inferred factors can be flexibly composed, even with factors inferred by other models, to generate a variety of scenes sharply different from those seen at training time. Code and visualizations are at https://energy-based-model.github.io/decomp-diffusion.
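As a rough illustration of how inferred factors might be recombined at generation time, the sketch below composes several per-component diffusion denoisers by summing their noise predictions inside a DDPM-style sampling loop, in the spirit of composable-diffusion approaches. This is a minimal toy sketch under that assumption, not the authors' released implementation: the `ComponentDenoiser` network, the latent sizes, and the linear noise schedule are all hypothetical stand-ins.

```python
# Toy sketch: compose per-component denoisers by summing noise predictions.
# All architecture/schedule details here are illustrative assumptions.
import torch
import torch.nn as nn

class ComponentDenoiser(nn.Module):
    """Toy epsilon-network conditioned on one inferred component latent z_k."""
    def __init__(self, img_dim=64, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim + latent_dim + 1, 128),
            nn.ReLU(),
            nn.Linear(128, img_dim),
        )

    def forward(self, x_t, t, z):
        # Predict the noise in x_t given timestep t and component latent z.
        t_feat = t.float().unsqueeze(-1) / 1000.0
        return self.net(torch.cat([x_t, z, t_feat], dim=-1))

@torch.no_grad()
def compose_and_sample(denoiser, latents, num_steps=50, img_dim=64):
    """DDPM-style sampling where the effective noise prediction is the sum
    of per-component predictions, one per inferred factor z_k (the factors
    could come from different images, or even from different models)."""
    betas = torch.linspace(1e-4, 0.02, num_steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(1, img_dim)  # start from pure noise
    for i in reversed(range(num_steps)):
        t = torch.full((1,), i)
        # Additive composition over the K component latents.
        eps = sum(denoiser(x, t, z) for z in latents)
        coef = betas[i] / torch.sqrt(1.0 - alpha_bars[i])
        mean = (x - coef * eps) / torch.sqrt(alphas[i])
        noise = torch.randn_like(x) if i > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[i]) * noise
    return x

denoiser = ComponentDenoiser()
# Two hypothetical component latents, e.g. one inferred from each of two images.
z1, z2 = torch.randn(1, 32), torch.randn(1, 32)
sample = compose_and_sample(denoiser, [z1, z2])
```

Summing the noise predictions corresponds, up to scaling, to sampling from a product of the component distributions (a sum of energies), which is what makes factors inferred from separate images combinable into a single coherent scene.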
