Poster in Workshop: Challenges in Deployable Generative AI

Interpolating between Images with Diffusion Models

Clinton Wang · Polina Golland

Keywords: [ video generation ] [ image interpolation ] [ denoising diffusion model ] [ latent diffusion model ] [ image editing ]


Abstract:

One little-explored frontier of image generation and editing is the task of interpolating between two input images, a feature missing from all currently deployed image generation pipelines. We argue that this capability can expand the creative applications of generative models, and we propose a method for zero-shot controllable interpolation using latent diffusion models. We apply interpolation in latent space at a sequence of decreasing noise levels, then perform denoising conditioned on interpolated text embeddings derived from textual inversion and (optionally) subject poses derived from OpenPose. For greater consistency, or to satisfy additional criteria, we can generate several candidates and use CLIP to select the highest-quality image. We obtain convincing interpolations across diverse subject poses, image styles, and image content, and show that standard quantitative metrics such as FID are insufficient to identify successful interpolations.
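The abstract describes interpolating latents at intermediate noise levels together with linearly interpolated text embeddings. A minimal sketch of the central operation appears below: spherical linear interpolation (slerp) between partially noised latents, which is the standard choice for diffusion latents because plain linear interpolation pulls Gaussian noise off its typical norm. This is an illustrative reconstruction, not the authors' code; the per-frame workflow in the trailing comments uses hypothetical helper names (`encode`, `add_noise`, `denoise_from`).

```python
import torch

def slerp(z0: torch.Tensor, z1: torch.Tensor, alpha: float) -> torch.Tensor:
    """Spherical linear interpolation between two latent tensors.

    Linear interpolation of Gaussian latents shrinks their norm toward the
    origin; slerp keeps the interpolant on (approximately) the same noise
    shell, so the denoiser sees inputs with the statistics it was trained on.
    """
    z0_flat, z1_flat = z0.flatten(), z1.flatten()
    cos_theta = torch.dot(z0_flat, z1_flat) / (z0_flat.norm() * z1_flat.norm())
    theta = torch.acos(cos_theta.clamp(-1.0, 1.0))
    if theta.abs() < 1e-4:  # nearly parallel latents: plain lerp is fine
        return (1.0 - alpha) * z0 + alpha * z1
    return (
        torch.sin((1.0 - alpha) * theta) * z0 + torch.sin(alpha * theta) * z1
    ) / torch.sin(theta)

# Demo: for high-dimensional Gaussian latents, slerp at alpha = 0.5 preserves
# the norm, whereas plain lerp shrinks it by roughly 1/sqrt(2).
z0, z1 = torch.randn(4, 64, 64), torch.randn(4, 64, 64)
print(slerp(z0, z1, 0.5).norm(), ((z0 + z1) / 2).norm())

# Hypothetical per-frame workflow at mixing weight alpha and noise level t
# (helper names are placeholders for a VAE encoder, forward diffusion, and
# conditional reverse diffusion, respectively):
#   z0_t, z1_t = add_noise(encode(img0), t), add_noise(encode(img1), t)
#   z_t = slerp(z0_t, z1_t, alpha)               # interpolate noised latents
#   emb = (1 - alpha) * emb0 + alpha * emb1      # interpolate text embeddings
#   frame = denoise_from(z_t, t, emb)            # denoise conditioned on emb
```

As the abstract notes, several candidates can be generated per frame (e.g., by varying the noise level or seed) and ranked with CLIP to pick the most consistent result.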
