d3LLM: Ultra-Fast Diffusion LLM using Pseudo-Trajectory Distillation
Yu-Yang Qian ⋅ Junda Su ⋅ Lanxiang Hu ⋅ Peiyuan Zhang ⋅ Zhijie Deng ⋅ Peng Zhao ⋅ Hao Zhang
Abstract
Diffusion large language models (dLLMs) offer capabilities beyond those of autoregressive (AR) LLMs, such as parallel decoding and random-order generation. However, realizing these benefits in practice is non-trivial, as dLLMs inherently face an *accuracy-parallelism trade-off*. Despite increasing interest, existing methods typically focus on only one side of the coin, targeting either efficiency or performance. To address this limitation, we propose d3LLM (*Pseudo-Distilled Diffusion Large Language Model*), which strikes a balance between accuracy and parallelism: (i) during training, we introduce *pseudo-trajectory distillation* to teach the model which tokens can be decoded confidently at early steps, thereby improving parallelism; (ii) during inference, we employ *entropy-based multi-block decoding* with a KV-cache refresh mechanism to achieve high parallelism while maintaining accuracy. To better evaluate dLLMs, we also introduce AUP (*Accuracy Under Parallelism*), a new metric that jointly measures accuracy and parallelism. Experiments demonstrate that d3LLM achieves up to $10\times$ speedup over vanilla LLaDA/Dream, and up to $5\times$ speedup over the AR model Qwen-2.5-7B, with little accuracy degradation.
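To make the entropy-based decoding idea concrete, the following is a minimal sketch (not the authors' implementation) of one parallel refinement step: all masked positions whose predictive entropy falls below a threshold are committed in the same step, and the rest stay masked for later steps. The constants `MASK_ID` and `ENTROPY_THRESHOLD`, and the HuggingFace-style `model(tokens).logits` interface, are assumptions for illustration only; the abstract does not specify them.

```python
import torch
import torch.nn.functional as F

# Hypothetical constants -- actual values/interfaces in d3LLM are not given in the abstract.
MASK_ID = 126336          # placeholder mask-token id (assumption)
ENTROPY_THRESHOLD = 0.3   # per-token entropy cutoff in nats (assumption)

@torch.no_grad()
def entropy_based_decode_step(model, tokens):
    """One parallel refinement step: unmask every position whose
    predictive entropy is below a threshold; leave the rest masked."""
    logits = model(tokens).logits                  # (batch, seq_len, vocab), assumed interface
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)   # (batch, seq_len)

    masked = tokens == MASK_ID
    confident = masked & (entropy < ENTROPY_THRESHOLD)

    # Guarantee progress: if no masked position passes the threshold,
    # commit the single lowest-entropy masked position per sequence.
    none_confident = masked.any(dim=-1) & ~confident.any(dim=-1)
    if none_confident.any():
        ent_masked = entropy.masked_fill(~masked, float("inf"))
        best = ent_masked.argmin(dim=-1)
        confident[none_confident, best[none_confident]] = True

    predictions = probs.argmax(dim=-1)
    tokens = torch.where(confident, predictions, tokens)
    return tokens, int(confident.sum())            # updated tokens, #tokens decoded this step
```

Repeating this step until no mask tokens remain yields more tokens per forward pass when the model is confident, which is the parallelism the accuracy-parallelism trade-off refers to.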