

Poster in Workshop: Structured Probabilistic Inference and Generative Modeling

Diffusion Domain Expansion: Learning to Coordinate Pre-Trained Diffusion Models

Egor Lifar · Semyon Savkin · Timur Garipov · Shangyuan Tong · Tommi Jaakkola

Keywords: [ Diffusion Models ] [ model coordination ] [ compositional models ] [ music generation ] [ conditional image generation ]


Abstract:

Generative models are often limited by their training-time domain specifications, such as the size of generated objects. These restrictions hinder their applicability to scenarios requiring the generation of progressively more complex data, such as long music tracks, motivating the need for generative techniques that scale to larger domains. In this paper, we propose Diffusion Domain Expansion (DDE), a method that efficiently extends pre-trained diffusion models to generate larger objects and handle more complex conditioning beyond their original capabilities. Our method employs a compact trainable network that coordinates the denoised outputs of pre-trained diffusion models. We demonstrate that the coordinator can be universally simple while generalizing to domains larger than those observed during its training. We evaluate DDE on long audio track generation and conditional image generation, demonstrating its applicability across domains. DDE outperforms other approaches to coordinated generation with diffusion models in both qualitative and quantitative evaluations.
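The abstract does not specify how the coordinator combines denoised outputs, so the following is only a minimal illustrative sketch of the general idea: a small trainable network that blends the denoised predictions of pre-trained diffusion models applied to overlapping chunks of a larger object. All module names, the overlap scheme, and the blending rule here are assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the DDE implementation): a compact coordinator that
# blends the denoised outputs of two pre-trained diffusion models applied to
# overlapping segments of a larger object. Shapes, architecture, and the
# blending rule are assumed for illustration only.
import torch
import torch.nn as nn


class Coordinator(nn.Module):
    """Predicts per-position blending weights for two overlapping denoised chunks."""

    def __init__(self, channels: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(2 * channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),  # weight in [0, 1] at each position of the overlap
        )

    def forward(self, left: torch.Tensor, right: torch.Tensor) -> torch.Tensor:
        # left, right: denoised predictions from two pre-trained models on the
        # shared overlap region, shape (batch, channels, overlap_len).
        w = self.net(torch.cat([left, right], dim=1))
        return w * left + (1 - w) * right  # coordinated output on the overlap


if __name__ == "__main__":
    # Toy usage with random tensors standing in for denoised audio-like chunks.
    coord = Coordinator(channels=8)
    a = torch.randn(2, 8, 128)  # denoised output on chunk 1's overlap region
    b = torch.randn(2, 8, 128)  # denoised output on chunk 2's overlap region
    merged = coord(a, b)
    print(merged.shape)  # torch.Size([2, 8, 128])
```

Because the coordinator only sees local overlap regions, a design of this kind could in principle be applied to objects larger than those seen during its training, which is the generalization property the abstract highlights.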
