Diffusion models have demonstrated excellent potential for generating diverse images. However, their performance often suffers from slow generation due to iterative denoising. Existing distillation methods either require significant amounts of offline computation to generate synthetic training data or need to perform expensive online learning with the help of real data. In this work, we present a novel technique called BOOT that overcomes these limitations with an efficient data-free distillation algorithm. The core idea is to learn a time-conditioned model that predicts the output of a pre-trained diffusion model teacher given any time step. Such a model can be efficiently trained by bootstrapping from two consecutive sampled steps. Furthermore, our method can be easily adapted to large-scale text-to-image diffusion models, which are challenging for conventional methods given that the training sets are often large and difficult to access. We demonstrate the effectiveness of our approach on several benchmarks, achieving comparable generation quality while being orders of magnitude faster than the diffusion teacher. The text-to-image results show that BOOT is able to handle highly complex distributions, shedding light on efficient generative modeling.
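The bootstrapping idea in the abstract can be illustrated with a minimal sketch: the student's prediction at an earlier time step is regressed onto one teacher step applied to its own (detached) prediction at the adjacent later step, so training needs only sampled noise, never real data. All names below (`Student`, `teacher_step`, `boot_loss`) are hypothetical stand-ins, not the paper's implementation; `teacher_step` uses dummy dynamics where the real method would take one ODE step of the pre-trained teacher.

```python
import torch
import torch.nn as nn

class Student(nn.Module):
    """Time-conditioned student g_theta(eps, t) mapping pure noise eps
    directly to the teacher's partially denoised output at time t."""
    def __init__(self, dim=8):
        super().__init__()
        self.net = nn.Linear(dim + 1, dim)

    def forward(self, eps, t):
        t_embed = t.expand(eps.shape[0], 1)  # broadcast scalar time over batch
        return self.net(torch.cat([eps, t_embed], dim=-1))

def teacher_step(x, t, s):
    """Placeholder for one teacher solver step from time t to s (s < t).
    Dummy dynamics for illustration only."""
    return x * (1.0 - (t - s))

def boot_loss(student, eps, t, s):
    """Bootstrapping objective: the student's output at the earlier step s
    should match one teacher step applied to its detached output at t."""
    with torch.no_grad():  # stop gradient through the bootstrap target
        target = teacher_step(student(eps, torch.tensor([t])), t, s)
    pred = student(eps, torch.tensor([s]))
    return ((pred - target) ** 2).mean()

torch.manual_seed(0)
student = Student()
eps = torch.randn(4, 8)                  # pure noise -- no real data needed
loss = boot_loss(student, eps, t=0.8, s=0.7)
loss.backward()                          # gradients flow only via the s-branch
```

Because the target is produced from the student itself plus a single teacher step, the distillation never queries the full iterative sampling chain and requires no training images.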
Author Information
Jiatao Gu (Apple (MLR))
Shuangfei Zhai (Apple)
Yizhe Zhang (Machine Learning Research @ Apple)

I am a research scientist at Apple MLR, primarily working on natural language processing and machine learning. Before joining Apple, I was at Meta AI and Microsoft Research, working on natural language generation and NLP pre-training.
Lingjie Liu (University of Pennsylvania)
Joshua M Susskind (Apple, Inc.)
More from the Same Authors
- 2021: Implicit Acceleration and Feature Learning in Infinitely Wide Neural Networks with Bottlenecks »
  Etai Littwin · Omid Saremi · Shuangfei Zhai · Vimal Thilak · Hanlin Goh · Joshua M Susskind · Greg Yang
- 2021: Implicit Greedy Rank Learning in Autoencoders via Overparameterized Linear Networks »
  Shih-Yu Sun · Vimal Thilak · Etai Littwin · Omid Saremi · Joshua M Susskind
- 2023 Poster: Stabilizing Transformer Training by Preventing Attention Entropy Collapse »
  Shuangfei Zhai · Tatiana Likhomanenko · Etai Littwin · Dan Busbridge · Jason Ramapuram · Yizhe Zhang · Jiatao Gu · Joshua M Susskind
- 2023 Poster: NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion »
  Jiatao Gu · Alex Trevithick · Kai-En Lin · Joshua M Susskind · Christian Theobalt · Lingjie Liu · Ravi Ramamoorthi
- 2022 Poster: Efficient Representation Learning via Adaptive Context Pooling »
  Chen Huang · Walter Talbott · Navdeep Jaitly · Joshua M Susskind
- 2022 Spotlight: Efficient Representation Learning via Adaptive Context Pooling »
  Chen Huang · Walter Talbott · Navdeep Jaitly · Joshua M Susskind
- 2022 Poster: Position Prediction as an Effective Pretraining Strategy »
  Shuangfei Zhai · Navdeep Jaitly · Jason Ramapuram · Dan Busbridge · Tatiana Likhomanenko · Joseph Cheng · Walter Talbott · Chen Huang · Hanlin Goh · Joshua M Susskind
- 2022 Spotlight: Position Prediction as an Effective Pretraining Strategy »
  Shuangfei Zhai · Navdeep Jaitly · Jason Ramapuram · Dan Busbridge · Tatiana Likhomanenko · Joseph Cheng · Walter Talbott · Chen Huang · Hanlin Goh · Joshua M Susskind
- 2021 Poster: Uncertainty Weighted Actor-Critic for Offline Reinforcement Learning »
  Yue Wu · Shuangfei Zhai · Nitish Srivastava · Joshua M Susskind · Jian Zhang · Ruslan Salakhutdinov · Hanlin Goh
- 2021 Spotlight: Uncertainty Weighted Actor-Critic for Offline Reinforcement Learning »
  Yue Wu · Shuangfei Zhai · Nitish Srivastava · Joshua M Susskind · Jian Zhang · Ruslan Salakhutdinov · Hanlin Goh
- 2020 Poster: Equivariant Neural Rendering »
  Emilien Dupont · Miguel Angel Bautista Martin · Alex Colburn · Aditya Sankar · Joshua M Susskind · Qi Shan
- 2019 Poster: Addressing the Loss-Metric Mismatch with Adaptive Loss Alignment »
  Chen Huang · Shuangfei Zhai · Walter Talbott · Miguel Angel Bautista Martin · Shih-Yu Sun · Carlos Guestrin · Joshua M Susskind
- 2019 Oral: Addressing the Loss-Metric Mismatch with Adaptive Loss Alignment »
  Chen Huang · Shuangfei Zhai · Walter Talbott · Miguel Angel Bautista Martin · Shih-Yu Sun · Carlos Guestrin · Joshua M Susskind