Plug-and-Play Guidance for Discrete Diffusion Models via Gradient-Informed Logit Correction
Abstract
Controllable generation with discrete diffusion models is often hindered by high computational overhead or the need for retraining. In this paper, we present Gradient-Informed Logit Correction (GILC), a plug-and-play framework that efficiently estimates guidance signals by repurposing the pretrained denoising network as a variational proxy. To circumvent the gradient instability inherent in high-dimensional discrete spaces, we introduce a Jacobian-free mechanism that directly corrects the clean prediction logits, facilitating stable and effective guidance. Our method accommodates both differentiable and non-differentiable reward functions. Extensive experiments across DNA, protein sequence, and molecular generation tasks demonstrate that GILC achieves state-of-the-art performance without additional training, frequently outperforming fine-tuning approaches.
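The core mechanism described above, correcting the denoiser's clean-prediction logits with a reward gradient while avoiding backpropagation through the denoising network itself, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the function names (`guided_logits`, `denoiser`, `reward_fn`) and the `scale` parameter are hypothetical, and the reward here is assumed to be differentiable with respect to the predicted token distribution.

```python
import torch

def guided_logits(denoiser, x_t, t, reward_fn, scale=1.0):
    """Hypothetical sketch of gradient-informed logit correction.

    The pretrained denoiser is queried under no_grad, so no Jacobian of the
    network is ever computed; the reward gradient is taken only with respect
    to the clean-prediction logits themselves.
    """
    with torch.no_grad():
        logits = denoiser(x_t, t)  # (batch, seq_len, vocab) clean-prediction logits

    # Re-attach the logits as leaf variables so autograd differentiates
    # only through the (assumed differentiable) reward, not the denoiser.
    logits = logits.detach().requires_grad_(True)
    probs = torch.softmax(logits, dim=-1)
    reward = reward_fn(probs).sum()
    (grad,) = torch.autograd.grad(reward, logits)

    # Correct the logits directly in the direction of higher reward.
    return logits.detach() + scale * grad
```

Under this sketch, sampling would proceed as usual for the discrete diffusion model, except that the corrected logits replace the raw clean-prediction logits at each denoising step; no retraining or fine-tuning of the denoiser is involved.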