One-Step Diffusion Distillation via Deep Equilibrium Models
Zhengyang Geng · Ashwini Pokle · Zico Kolter
Event URL: https://openreview.net/forum?id=f9eVDYrKXI
Diffusion models excel at producing high-quality samples but naively require hundreds of iterations, prompting multiple attempts to distill this process into a faster network. Existing approaches, however, often require complex multi-stage distillation and perform sub-optimally in single-step image generation. In response, we introduce a simple yet effective means of diffusion distillation---*directly* mapping initial noise to the resulting image. Of particular importance to our approach is to leverage a new Deep Equilibrium (DEQ) model for distillation: the Generative Equilibrium Transformer (GET). Our method enables fully offline training with just noise/image pairs from the diffusion model while achieving superior performance compared to existing one-step methods on comparable training budgets. The DEQ architecture proves crucial, as GET matches a $5\times$ larger ViT in terms of FID scores while striking a critical balance of computational cost and image quality. Code, checkpoints, and datasets will be released.
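For readers unfamiliar with the two ideas the abstract combines, here is a minimal sketch: a weight-tied network iterated toward a fixed point (the DEQ part), trained offline by regressing onto precomputed noise/image pairs from a diffusion teacher (the distillation part). Everything below (`FixedPointBlock`, `DEQGenerator`, the MSE objective, and the random tensors standing in for the teacher's noise/image pairs) is a hypothetical illustration under those assumptions, not the authors' released GET code; real DEQs solve the fixed point with a root finder and train via implicit differentiation.

```python
import torch
import torch.nn as nn

class FixedPointBlock(nn.Module):
    """A single weight-tied block, iterated to an approximate fixed point."""
    def __init__(self, dim):
        super().__init__()
        self.f = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.GELU(), nn.Linear(dim, dim)
        )

    def forward(self, z, x):
        # z: current equilibrium estimate; x: the input injection (noise).
        return self.f(torch.cat([z, x], dim=-1))

class DEQGenerator(nn.Module):
    """One-step generator: maps noise x to an image via z* = f(z*, x)."""
    def __init__(self, dim, iters=12):
        super().__init__()
        self.block = FixedPointBlock(dim)
        self.iters = iters

    def forward(self, x):
        z = torch.zeros_like(x)
        # Naive fixed-point iteration; a practical DEQ would use a root
        # solver (e.g. Anderson acceleration) plus implicit differentiation.
        for _ in range(self.iters):
            z = self.block(z, x)
        return z

# Offline distillation: regress the one-step model onto (noise, image)
# pairs sampled once from the diffusion teacher. Random tensors stand in
# for that dataset here purely to make the sketch self-contained.
model = DEQGenerator(dim=64)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
noise = torch.randn(128, 64)    # stand-in: teacher's initial noise
target = torch.randn(128, 64)   # stand-in: teacher's final samples
for step in range(100):
    loss = nn.functional.mse_loss(model(noise), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the setting the abstract describes, the targets would be samples produced by running the full diffusion sampler once per noise seed; since those pairs are generated ahead of time, training needs no further queries to the teacher, which is what makes it fully offline.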

Author Information

Zhengyang Geng (Peking University)
Ashwini Pokle (Carnegie Mellon University)
Zico Kolter (Carnegie Mellon University / Bosch Center for AI)
