Self-Prompting Diffusion Transformer for Open-Vocabulary Scene Text Editing via In-Context Learning
Abstract
Scene text editing aims to modify text in a target region of an image while preserving its background style and texture. Existing methods rely solely on background information and neglect the visual details of the target region, which discards the stylistic features of the original text and essentially reduces the task to text rendering. Moreover, the conditions imposed by a pre-trained glyph encoder limit the scope of editable text. To address these issues, this paper proposes a self-prompting scene text editing method that constructs style and glyph prompts directly from the original image, without additional style or glyph encoders. We employ a two-stage training strategy in which the diffusion transformer is first trained on a large-scale self-supervised dataset and then refined with a small set of paired images. By leveraging the in-context learning capability of FLUX-Fill, our method achieves open-vocabulary, style-consistent text editing. Experimental results on multiple languages demonstrate that our method achieves state-of-the-art performance in both text accuracy and style consistency.