From Bits to Rounds: Parallel Decoding with Exploration for Diffusion Language Models
Abstract
Diffusion Language Models (DLMs) have recently emerged as a strong alternative to autoregressive language models (AR-LMs), offering comparable accuracy and faster inference through parallel decoding. However, standard DLM decoding strategies, which unmask only high-confidence tokens, face an inherent information-theoretic bottleneck that limits decoding progress and ultimately slows generation. We formalize this with an information-theoretic lower bound showing that the number of decoding rounds must grow linearly with the sample's total information and inversely with the per-round information budget, establishing a bits-to-rounds principle. Motivated by this theory, we propose Explore-Then-Exploit (ETE), a training-free decoding strategy that maximizes information throughput and decoding efficiency. ETE combines cross-block decoding with targeted exploration of high-uncertainty tokens to reshape the conditional distribution and trigger cascades of confident predictions. Experiments across diverse benchmarks verify our theoretical bounds and demonstrate that ETE consistently reduces the number of decoding rounds compared to confidence-only baselines without compromising generation quality. Furthermore, ETE integrates efficiently with KV caching, translating these algorithmic gains into improved tokens-per-second throughput.
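As a schematic illustration of the bits-to-rounds principle (the notation below is ours, not necessarily the paper's): if each decoding round can commit at most $B$ bits of information and the completed sample $\mathbf{x}$ carries $H(\mathbf{x} \mid \mathbf{c})$ bits of information given the prompt $\mathbf{c}$, then any decoding schedule requires at least
\[
T \;\geq\; \frac{H(\mathbf{x} \mid \mathbf{c})}{B}
\]
rounds, so the round count scales linearly with the sample's total information and inversely with the per-round information budget.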