Masks Can Be Distracting: On Context Comprehension in Diffusion Language Models
Abstract
Masked Diffusion Language Models (MDLMs) have recently emerged as a promising alternative to Autoregressive Language Models (ARLMs), leveraging a denoising objective that, in principle, should enable more uniform context utilisation. In this work, we examine the context comprehension abilities of MDLMs and uncover two key limitations. First, despite their more global training objective and bidirectional attention mechanism, MDLMs, like ARLMs, exhibit a strong locality bias: performance is highly sensitive to the position of relevant information within the input, favouring local over distant context. Second, appending a large number of mask tokens, as required for generation, can significantly degrade context comprehension in models trained from scratch. Through systematic ablations, we find that these masks act as distractors, reducing the model's ability to process relevant information. To study and address this undesirable behaviour, we introduce a mask-agnostic loss function that encourages predictions to remain invariant to the number of appended masks. Fine-tuning with this objective substantially mitigates the distracting effect of masks, improving the robustness of MDLMs. Overall, our findings reveal critical limitations of the current MDLM training paradigm, with implications for training, evaluation and deployment.
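To make the mask-agnostic idea concrete, the sketch below shows one way such a consistency objective might look in PyTorch: the model's predictions at a generation position are compared under two different mask budgets, and a symmetrised KL term penalises any disagreement. The model interface, the mask counts, the `mask_token_id`, and the choice of symmetrised KL are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def mask_agnostic_loss(model, input_ids, mask_token_id,
                       n_masks_a=16, n_masks_b=256):
    """Consistency loss: predictions over the context should not depend
    on how many mask tokens are appended for generation.

    Illustrative sketch only; `model` is assumed to map token ids of
    shape (batch, length) to logits of shape (batch, length, vocab).
    """
    batch, seq_len = input_ids.shape

    def with_masks(n):
        # Append n mask tokens to the real context, as done at generation time.
        masks = torch.full((batch, n), mask_token_id,
                           dtype=input_ids.dtype, device=input_ids.device)
        return torch.cat([input_ids, masks], dim=1)

    # Logits at the first appended mask position under two mask budgets.
    logits_a = model(with_masks(n_masks_a))[:, seq_len, :]
    logits_b = model(with_masks(n_masks_b))[:, seq_len, :]

    # Symmetrised KL encourages invariance to the number of appended masks.
    log_p_a = F.log_softmax(logits_a, dim=-1)
    log_p_b = F.log_softmax(logits_b, dim=-1)
    kl_ab = F.kl_div(log_p_b, log_p_a, log_target=True, reduction="batchmean")
    kl_ba = F.kl_div(log_p_a, log_p_b, log_target=True, reduction="batchmean")
    return 0.5 * (kl_ab + kl_ba)
```

In a fine-tuning loop, a term of this form would be added to the standard denoising loss, so that the model both predicts masked tokens correctly and does so consistently regardless of how many masks follow the context.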