ReDi: Efficient Learning-Free Diffusion Inference via Trajectory Retrieval
Kexun Zhang · Xianjun Yang · William Wang · Lei Li

Tue Jul 25 02:00 PM -- 04:30 PM (PDT) @ Exhibit Hall 1 #544
Diffusion models show promising generation capability for a variety of data. Despite their high generation quality, inference with diffusion models remains time-consuming due to the numerous sampling iterations required. To accelerate inference, we propose ReDi, a simple yet learning-free Retrieval-based Diffusion sampling framework. From a precomputed knowledge base, ReDi retrieves a trajectory similar to the partially generated trajectory at an early stage of generation, skips a large portion of intermediate steps, and continues sampling from a later step in the retrieved trajectory. We theoretically prove that the generation performance of ReDi is guaranteed. Our experiments demonstrate that ReDi achieves a 2$\times$ speedup in model inference. Furthermore, ReDi generalizes well to zero-shot cross-domain image generation such as image stylization. The code and demo for ReDi are available at https://github.com/zkx06111/ReDiffusion.
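The retrieval step described above can be sketched as follows. This is a minimal illustration of the idea, not the authors' implementation: the function name `redi_skip`, the knowledge-base layout, and the nearest-neighbor criterion (L2 distance on early-step latents) are all assumptions for exposition.

```python
import numpy as np

def redi_skip(partial_key, knowledge_base, jump_to_step):
    """Hypothetical sketch of ReDi's retrieval step (names are assumptions,
    not the authors' API). Given the latent at an early sampling step of the
    current generation, find the stored trajectory whose early-step latent is
    nearest, and return its latent at a later step to resume sampling from.

    knowledge_base: list of (early_latent, full_trajectory) pairs,
    precomputed offline by running the full sampler on seed inputs.
    """
    # Nearest-neighbor search over the early-step latents.
    dists = [np.linalg.norm(partial_key - key) for key, _ in knowledge_base]
    nearest = int(np.argmin(dists))
    _, trajectory = knowledge_base[nearest]
    # Skip the intermediate denoising steps: the sampler would continue
    # from this retrieved later-step state instead of iterating through
    # every step itself.
    return trajectory[jump_to_step]
```

In use, the sampler would run only the first few denoising steps to produce `partial_key`, call the retrieval once, then finish the remaining steps from the returned state, which is where the speedup comes from.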

Author Information

Kexun Zhang (University of California, Santa Barbara)
Xianjun Yang (University of California, Santa Barbara)
William Wang (University of California, Santa Barbara)
Lei Li (University of California, Santa Barbara)