Unified Safe In-context Image Generation in Multimodal Diffusion Transformers
Xiang Yang ⋅ Feifei Li ⋅ Mi Zhang ⋅ Geng Hong ⋅ Xiaoyu You ⋅ Mi Wen ⋅ Min Yang
Abstract
Diffusion transformers (DiTs) equipped with multimodal attention (MM-Attn) have become a dominant paradigm for image generation. However, preventing the generation of harmful content remains a critical challenge, particularly in image-to-image (I2I) editing tasks. Existing safety mechanisms are primarily designed for text-to-image (T2I) synthesis or U-Net-based architectures, which limits their effectiveness for unified safety mitigation in DiT-based frameworks. To bridge this gap, we propose the Unified Visual Safety Regulator (UVR), a training-free safe generation framework that regulates unsafe semantics in generated images. UVR is grounded in an analysis of attention dynamics from the perspective of information flow in MM-Attn. We identify a task-independent start-up stage, during which unsafe semantics in output patches rapidly emerge and can be accurately localized, followed by task-specific semantic amplification and interference stages, during which harmful signals are further propagated and entangled with benign content. Based on these observations, UVR mitigates unsafe generation through unified, targeted attention modulation and explicit restriction of harmful information flow over the identified unsafe output patches. Experiments across various concepts show that UVR achieves state-of-the-art safety performance, with erase rates of 91% in image synthesis and 77% in editing tasks, while preserving visual quality and fidelity with minimal degradation.
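To make the core mechanism concrete, the sketch below illustrates one way the described restriction of harmful information flow could be realized in MM-Attn: attention logits from unsafe prompt tokens to output patches localized as unsafe are masked before the softmax. This is a minimal illustration under our own assumptions, not the paper's implementation; the tensor names, index arguments, and the specific masking scheme are all hypothetical.

```python
# Minimal sketch (assumed, not the paper's code) of blocking information
# flow in multimodal attention: output patches flagged as unsafe are
# prevented from attending to prompt tokens carrying unsafe semantics.
import torch

def masked_mm_attention(q, k, v, unsafe_patch_idx, unsafe_token_idx):
    """q, k, v: (batch, heads, seq, dim) over the joint text+image sequence.
    unsafe_patch_idx: positions of output patches localized as unsafe.
    unsafe_token_idx: positions of prompt tokens carrying unsafe semantics.
    Both index arguments are 1-D LongTensors into the joint sequence.
    """
    b, h, s, d = q.shape
    logits = q @ k.transpose(-2, -1) / d ** 0.5  # (b, h, s, s)

    # Build a boolean mask that marks the unsafe-token -> unsafe-patch
    # entries; broadcasting (P, 1) x (T,) selects every such pair.
    mask = torch.zeros(s, s, dtype=torch.bool, device=q.device)
    mask[unsafe_patch_idx.unsqueeze(-1), unsafe_token_idx] = True

    # Setting the masked logits to -inf zeroes their softmax weight,
    # cutting the harmful information flow into those output patches.
    logits = logits.masked_fill(mask, float("-inf"))
    return logits.softmax(dim=-1) @ v
```

Because the mask only touches the flagged patch-token pairs, attention among benign tokens and patches is unchanged, which is consistent with the goal of preserving visual quality and fidelity.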