DLM-Scope: Mechanistic Interpretability of Diffusion Language Models via Sparse Autoencoders
Abstract
Sparse autoencoders (SAEs) have become a standard tool for mechanistic interpretability in autoregressive large language models (LLMs), enabling researchers to extract sparse, human-interpretable features and to intervene on model behavior. As diffusion language models (DLMs) emerge as an increasingly powerful and promising alternative to autoregressive LLMs, it is essential to develop mechanistic interpretability tools tailored to this class of models. In this work, we present DLM-Scope, the first SAE-based interpretability framework for DLMs, and demonstrate that trained Top-K SAEs faithfully extract sparse, interpretable features. Notably, we find that inserting SAEs affects DLMs differently from autoregressive LLMs: whereas SAE insertion in LLMs typically incurs a loss penalty, inserting SAEs at early layers of DLMs can reduce cross-entropy loss, a phenomenon that is absent or markedly weaker in LLMs. Additionally, SAE features in DLMs enable more effective diffusion-time interventions, often outperforming comparable steering in LLMs. Moreover, we pioneer new SAE-based research directions for DLMs: we show that SAEs provide useful signals for guiding DLM decoding order, and that SAE features remain stable during DLM post-training. Overall, our work establishes a foundation for mechanistic interpretability in DLMs and highlights the potential of applying SAEs to DLM-related tasks and algorithms.
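As context for the Top-K SAEs referenced above, the following is a minimal sketch of a Top-K sparse autoencoder applied to residual-stream activations. The class name `TopKSAE`, the dimensions, and the tied decoder bias are illustrative assumptions for exposition, not the exact architecture or training setup used in this work.

```python
import torch
import torch.nn as nn

class TopKSAE(nn.Module):
    """Minimal Top-K sparse autoencoder over residual-stream activations.

    Hypothetical sketch: dictionary size, bias handling, and naming are
    illustrative assumptions, not this paper's exact configuration.
    """

    def __init__(self, d_model: int, d_dict: int, k: int):
        super().__init__()
        self.k = k
        self.encoder = nn.Linear(d_model, d_dict)
        self.decoder = nn.Linear(d_dict, d_model, bias=False)
        self.b_dec = nn.Parameter(torch.zeros(d_model))

    def forward(self, x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        # Encode, then keep only the k largest pre-activations per token.
        pre = self.encoder(x - self.b_dec)
        topk = torch.topk(pre, self.k, dim=-1)
        feats = torch.zeros_like(pre).scatter_(
            -1, topk.indices, torch.relu(topk.values)
        )
        # Reconstruct the activation from the sparse feature code.
        recon = self.decoder(feats) + self.b_dec
        return recon, feats

# Example: reconstruct a batch of (hypothetical) DLM residual activations.
sae = TopKSAE(d_model=2048, d_dict=16384, k=64)
x = torch.randn(8, 128, 2048)        # (batch, tokens, d_model)
recon, feats = sae(x)
mse = torch.mean((recon - x) ** 2)   # reconstruction error minimized during SAE training
```

In this kind of setup, "inserting" the SAE means replacing a layer's residual activations with `recon` during the forward pass, and "steering" means editing individual entries of `feats` before decoding them back into the residual stream.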