BOOT: Data-free Distillation of Denoising Diffusion Models with Bootstrapping
Jiatao Gu · Shuangfei Zhai · Yizhe Zhang · Lingjie Liu · Joshua M Susskind
Event URL: https://openreview.net/forum?id=ZeM7S01Xi8

Diffusion models have demonstrated excellent potential for generating diverse images. However, their performance often suffers from slow generation due to iterative denoising. Existing distillation methods either require significant amounts of offline computation for generating synthetic training data or need to perform expensive online learning with the help of real data. In this work, we present a novel technique called BOOT that overcomes these limitations with an efficient data-free distillation algorithm. The core idea is to learn a time-conditioned model that predicts the output of a pre-trained diffusion model teacher given any time step. Such a model can be trained efficiently by bootstrapping from two consecutive sampled steps. Furthermore, our method can be easily adapted to large-scale text-to-image diffusion models, which are challenging for conventional methods because their training sets are often large and difficult to access. We demonstrate the effectiveness of our approach on several benchmarks, achieving comparable generation quality while being orders of magnitude faster than the diffusion teacher. The text-to-image results show that BOOT is able to handle highly complex distributions, shedding light on efficient generative modeling.
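To make the bootstrapping idea concrete, the sketch below shows one plausible form of the training objective in PyTorch: the student's prediction at the next time step is regressed onto a stop-gradient target obtained by refining its current prediction with a single teacher step. The names `student`, `teacher_step`, `eps`, `t`, and `s` are illustrative assumptions, not the paper's actual API; the paper's exact formulation (e.g., its signal-ODE parameterization) may differ.

```python
import torch

def boot_distillation_loss(student, teacher_step, eps, t, s):
    """One bootstrapped distillation step (hypothetical sketch, not the paper's code).

    student(eps, t):       predicts the teacher's sample at time t, given only noise eps.
    teacher_step(x, t, s): one deterministic teacher ODE step from time t to the
                           next (less noisy) time s along the sampling trajectory.
    """
    with torch.no_grad():
        x_t = student(eps, t)              # current student prediction at time t
        target = teacher_step(x_t, t, s)   # teacher refines it by one step, t -> s
    x_s = student(eps, s)                  # student prediction at the next step s
    return torch.mean((x_s - target) ** 2) # match the bootstrapped target

# Usage sketch: sample eps ~ N(0, I) and two consecutive time steps, then
# backpropagate the loss through x_s only (the target is held fixed).
```

Because the target is produced from the student's own output rather than from data, no real or synthetic training images are needed, which is what makes the distillation data-free.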

Author Information

Jiatao Gu (Apple (MLR))
Shuangfei Zhai (Apple)
Yizhe Zhang (Machine Learning Research @ Apple)

I am a research scientist at Apple MLR, primarily working on natural language processing and machine learning. Before joining Apple, I was at Meta AI and Microsoft Research, working on natural language generation and NLP pre-training.

Lingjie Liu (University of Pennsylvania)
Joshua M Susskind (Apple, Inc.)
