RSTR: Reducing SpatioTemporal Redundancy in Diffusion Transformers
Ruitong Sun ⋅ Tianze Yang ⋅ Wei Niu ⋅ Jin Sun
Abstract
Diffusion Transformers (DiTs) have achieved remarkable success in image generation, yet their deployment is hindered by high computational costs. We identify two sources of redundancy. First, $\textbf{temporal redundancy}$: Classifier-Free Guidance (CFG) applies costly dual forward passes at every timestep, yet guidance matters only at specific steps, and variable scales at those critical steps can compensate for skipping the rest. Second, $\textbf{spatial redundancy}$: under variable guidance, different transformer blocks exhibit heterogeneous sensitivity, yet uniform calibration across all blocks wastes computation while failing to meet their varying requirements. We present RSTR, the first framework to jointly reduce spatiotemporal redundancy in diffusion transformers. Stage-1 addresses temporal redundancy through evolutionary search, discovering sparse guidance schedules with variable scales. Stage-2 addresses spatial redundancy through adaptive rank allocation, assigning calibration capacity to transformer regions according to their sensitivity. Experiments on DiT-XL/2, PixArt-$\alpha$, FLUX, and the state-of-the-art Qwen-Image demonstrate 50\%-70\% compute savings while maintaining or improving quality. On DiT-XL/2, RSTR achieves 57\% savings with a 15\% FID improvement; on Qwen-Image, it delivers a 3.43$\times$ speedup with preserved quality.
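To make the Stage-1 idea concrete, the sketch below shows CFG with a sparse schedule of variable guidance scales: timesteps present in the schedule run the usual dual forward pass with a step-specific scale, and all other timesteps fall back to a single conditional pass. This is a minimal illustration under stated assumptions, not the paper's implementation; the function name `cfg_denoise`, the `schedule` dictionary, the dummy model, and the scale values are all hypothetical.

```python
def cfg_denoise(model, x_t, t, cond, uncond, schedule):
    """One denoising step with sparse, variable-scale CFG.

    schedule: dict mapping timestep -> guidance scale; timesteps not in the
    dict skip guidance entirely (hypothetical interface, for illustration).
    """
    if t in schedule:
        # Guided step: two forward passes, combined with a step-specific scale.
        eps_cond = model(x_t, t, cond)
        eps_uncond = model(x_t, t, uncond)
        w = schedule[t]
        return eps_uncond + w * (eps_cond - eps_uncond)
    # Skipped step: a single conditional pass, roughly halving the CFG cost.
    return model(x_t, t, cond)

# Toy usage with a stand-in "model" to show the control flow; the schedule
# values are placeholders, not the ones found by the evolutionary search.
dummy_model = lambda x, t, c: x * 0.0
schedule = {999: 7.5, 750: 4.0, 500: 2.0}
out = cfg_denoise(dummy_model, 1.0, 750, "cond", "uncond", schedule)
```

Because each skipped timestep removes one of the two forward passes, the fraction of timesteps kept in the schedule directly determines the temporal compute savings.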