Poster

Compositional Image Decomposition with Diffusion Models

Jocelin Su · Nan Liu · Yanbo Wang · Josh Tenenbaum · Yilun Du


Abstract:

Given an image of a natural scene, we are able to quickly decompose it into a set of components such as objects, lighting, shadows, and foreground. We can then picture how the image would look if we were to recombine certain components with those from other images, for instance producing a scene with a set of objects from our bedroom and animals from a zoo under the lighting conditions of a forest, even if we have never seen such a scene in real life before. We present a method to decompose an image into such compositional components. Our approach, Decomp Diffusion, is an unsupervised method which, when given a single image, infers a set of different components in the image, each represented by a diffusion model. We demonstrate how components can capture different factors of the scene, ranging from global scene descriptors (e.g., shadows, foreground, facial expression) to local scene descriptors (e.g., objects). We further illustrate how inferred factors can be flexibly composed, even with factors inferred from other models, to generate a variety of scenes sharply different from those seen at training time.
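The abstract describes composing per-component diffusion models to generate new scenes. The sketch below illustrates one common way such composition is done for diffusion models: summing the noise predictions of several conditional denoisers at each reverse-diffusion step. This is a minimal illustrative sketch, not the authors' implementation; `ComponentDenoiser`, the latent conditioning, and the DDPM schedule parameters are all assumptions introduced here for clarity.

```python
# Illustrative sketch (not the paper's code): composing per-component
# diffusion models by summing their predicted noise at each reverse step.
import torch
import torch.nn as nn

class ComponentDenoiser(nn.Module):
    """Stand-in epsilon-prediction network conditioned on one inferred
    component latent z; a real model would be a conditional U-Net."""
    def __init__(self, latent_dim=64, img_channels=3):
        super().__init__()
        self.cond = nn.Linear(latent_dim, img_channels)
        self.net = nn.Conv2d(img_channels, img_channels, 3, padding=1)

    def forward(self, x_t, t, z):
        # Broadcast the component latent over spatial dimensions.
        bias = self.cond(z)[:, :, None, None]
        return self.net(x_t + bias)

@torch.no_grad()
def composed_sample(denoisers, latents, steps=50, shape=(1, 3, 64, 64)):
    """DDPM-style reverse process in which the composition's noise
    estimate is the sum of each component model's prediction."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(shape)  # start from pure noise
    for t in reversed(range(steps)):
        # Sum epsilon predictions across components: the compositional step.
        eps = sum(m(x, t, z) for m, z in zip(denoisers, latents))
        a_t, ab_t = alphas[t], alpha_bars[t]
        mean = (x - (1 - a_t) / torch.sqrt(1 - ab_t) * eps) / torch.sqrt(a_t)
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x

# E.g., combine an "objects" factor inferred from one image with a
# "lighting" factor inferred from another (latents are random here).
models = [ComponentDenoiser(), ComponentDenoiser()]
zs = [torch.randn(1, 64), torch.randn(1, 64)]
sample = composed_sample(models, zs)
print(sample.shape)  # torch.Size([1, 3, 64, 64])
```

Summing noise predictions corresponds to adding the component models' scores, which is what allows factors inferred from different images, or even different models, to be recombined in a single sampling pass.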
