Poster
ReDi: Efficient Learning-Free Diffusion Inference via Trajectory Retrieval
Kexun Zhang · Xianjun Yang · William Wang · Lei Li
Diffusion models show promising generation capability for a variety of data. Despite their high generation quality, inference with diffusion models remains time-consuming due to the large number of sampling iterations required. To accelerate inference, we propose ReDi, a simple, learning-free Retrieval-based Diffusion sampling framework. From a precomputed knowledge base, ReDi retrieves a trajectory similar to the partially generated trajectory at an early stage of generation, skips a large portion of the intermediate steps, and continues sampling from a later step in the retrieved trajectory. We theoretically prove that the generation performance of ReDi is guaranteed. Our experiments demonstrate that ReDi achieves a 2$\times$ speedup in model inference. Furthermore, ReDi generalizes well to zero-shot cross-domain image generation such as image stylization. The code and demo for ReDi are available at https://github.com/zkx06111/ReDiffusion.
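The retrieval-and-skip idea in the abstract can be illustrated with a minimal sketch. This is a hypothetical NumPy toy, not the authors' implementation: the knowledge base maps each stored trajectory's early-step latent (the key) to its latent at a later step (the value); at inference time we find the nearest key to the partially generated latent and jump directly to the corresponding later-step latent, skipping the intermediate denoising steps. The function names, the flat nearest-neighbor search, and the use of L2 distance are all illustrative assumptions.

```python
import numpy as np

def build_knowledge_base(trajectories, key_step, value_step):
    """Build (keys, values) from precomputed sampling trajectories.

    trajectories: list of dicts mapping step index -> latent (np.ndarray).
    key_step: the early step whose latent serves as the retrieval key.
    value_step: the later step whose latent is returned on retrieval.
    """
    keys = np.stack([t[key_step].ravel() for t in trajectories])
    values = np.stack([t[value_step] for t in trajectories])
    return keys, values

def redi_skip(partial_latent, keys, values):
    """Retrieve the stored trajectory whose early-step latent is closest
    (in L2 distance) to the partially generated latent, and return its
    later-step latent. Sampling then resumes from this latent instead of
    running the skipped intermediate steps."""
    q = partial_latent.ravel()
    dists = np.linalg.norm(keys - q, axis=1)
    return values[np.argmin(dists)]
```

In use, one would run the first few denoising steps of the diffusion sampler normally, call `redi_skip` on the resulting latent, and continue sampling from the returned later-step latent.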
Author Information
Kexun Zhang (University of California, Santa Barbara)
Xianjun Yang (University of California, Santa Barbara)
William Wang (University of California, Santa Barbara)
Lei Li (University of California, Santa Barbara)
More from the Same Authors
- 2022 : Causal Balancing for Domain Generalization »
  Xinyi Wang · Michael Saxon · Jiachen Li · Hongyang Zhang · Kun Zhang · William Wang
- 2023 : Reasoning Ability Emerges in Large Language Models as Aggregation of Reasoning Paths »
  Xinyi Wang · William Wang
- 2023 : Generating Global Factual and Counterfactual Explainer for Molecule under Domain Constraints »
  Danqing Wang · Antonis Antoniades · Ambuj Singh · Lei Li
- 2023 : Large Language Models Are Implicitly Topic Models: Explaining and Finding Good Demonstrations for In-Context Learning »
  Xinyi Wang · Wanrong Zhu · Michael Saxon · Mark Steyvers · William Wang
- 2023 : Generative Autoencoders as Watermark Attackers: Analyses of Vulnerabilities and Threats »
  Xuandong Zhao · Kexun Zhang · Yu-Xiang Wang · Lei Li
- 2023 : Provable Robust Watermarking for AI-Generated Text »
  Xuandong Zhao · Prabhanjan Ananth · Lei Li · Yu-Xiang Wang
- 2023 Poster: Offline Reinforcement Learning with Closed-Form Policy Improvement Operators »
  Jiachen Li · Edwin Zhang · Ming Yin · Jerry Bai · Yu-Xiang Wang · William Wang
- 2023 Poster: Protecting Language Generation Models via Invisible Watermarking »
  Xuandong Zhao · Yu-Xiang Wang · Lei Li
- 2023 Poster: Importance Weighted Expectation-Maximization for Protein Sequence Design »
  Zhenqiao Song · Lei Li
- 2022 Poster: On the Learning of Non-Autoregressive Transformers »
  Fei Huang · Tianhua Tao · Hao Zhou · Lei Li · Minlie Huang
- 2022 Spotlight: On the Learning of Non-Autoregressive Transformers »
  Fei Huang · Tianhua Tao · Hao Zhou · Lei Li · Minlie Huang