Variable-Length Tokenization via Learnable Global Merging for Diffusion Transformers
Dong Hoon Lee ⋅ Seunghoon Hong
Abstract
Latent Diffusion Models (LDMs) have become dominant in visual synthesis, but their quality–compute trade-off is largely constrained by the tokenizer’s fixed compression ratio. Variable-length tokenizers (VLTs) promise adaptive compression by varying token counts, allowing diffusion models to flexibly balance quality and compute. However, conventional VLTs modulate length by truncating ordered token sequences, which changes token semantics across lengths and breaks representational alignment. This leads to significant cross-length variation in the latent distribution, hindering a single variable-length diffusion model from operating effectively. To address this, we propose a novel variable-length tokenizer that modulates length by merging tokens. We show that encouraging similar tokens to merge enables direct cross-length representation alignment when the diffusion transformer operates according to the merging pattern. Since conventional merging methods are data-dependent, leaving the merging pattern inaccessible during generation, we introduce learnable global merging, which is data-independent, to ensure compatibility with diffusion transformers. On ImageNet 256$\times$256 generation, our merging-based variable-length tokenizer integrated with a diffusion transformer achieves a superior gFID–compute trade-off compared to prior VLT methods.
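To make the notion of data-independent merging concrete, the sketch below shows one plausible form such a module could take; it is an illustrative assumption, not the paper’s actual implementation. A learnable assignment matrix, depending only on learned parameters and not on the input image, maps N encoder tokens to M merged tokens, so the merging pattern is known in advance and can be shared with the diffusion transformer at generation time. The class name, shapes, and the softmax-normalized soft assignment are all assumptions for illustration.

```python
import torch
import torch.nn as nn


class LearnableGlobalMerging(nn.Module):
    """Hypothetical sketch of data-independent (global) token merging.

    The assignment depends only on learned parameters, never on the input,
    so the merging pattern is accessible during generation and can be used
    by a diffusion transformer operating at the reduced token length.
    """

    def __init__(self, num_tokens: int, num_merged: int, temperature: float = 1.0):
        super().__init__()
        # Learnable assignment logits: one row per merged (output) token.
        self.assign_logits = nn.Parameter(torch.randn(num_merged, num_tokens) * 0.02)
        self.temperature = temperature

    def assignment(self) -> torch.Tensor:
        # Soft assignment of input tokens to merged tokens, normalized over
        # input positions so each merged token is a weighted average.
        return torch.softmax(self.assign_logits / self.temperature, dim=-1)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, D) -> merged: (B, M, D)
        A = self.assignment()  # (M, N), data-independent
        return torch.einsum("mn,bnd->bmd", A, tokens)

    def unmerge(self, merged: torch.Tensor) -> torch.Tensor:
        # Expand merged tokens back to the original length with the same
        # global pattern (a simple weighted broadcast, for illustration).
        A = self.assignment()  # (M, N)
        return torch.einsum("mn,bmd->bnd", A, merged)
```

Under these assumptions, varying `num_merged` yields different sequence lengths while the fixed, learned pattern keeps merged-token semantics consistent across lengths, which is the property the abstract attributes to merging-based length modulation.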