Both Semantics and Reconstruction Matter: Making Representation Encoders Ready for Text-to-Image Generation and Editing
Abstract
Modern Latent Diffusion Models (LDMs) typically operate in low-level Variational Autoencoder (VAE) latent spaces that are optimized primarily for pixel-level reconstruction. To unify vision generation and understanding, a burgeoning trend is to adopt high-dimensional features from representation encoders as generative latents. However, we empirically identify two fundamental obstacles in this paradigm: (1) the discriminative feature space lacks compact regularization, making diffusion models prone to off-manifold latents that lead to inaccurate object structures; and (2) the encoder’s inherently weak pixel-level reconstruction hinders the generator from learning accurate fine-grained geometry and texture. In this paper, we propose a systematic framework for adapting understanding-oriented encoder features to generative tasks. We introduce a semantic–pixel reconstruction objective that regularizes the latent space, compressing both semantic information and fine-grained details into a highly compact representation (96 channels at 16× spatial downsampling). This design keeps the latent space semantically rich, achieves state-of-the-art image reconstruction, and remains compact enough for accurate generation. Leveraging this representation, we design a unified text-to-image (T2I) and image editing model. Across diverse generation spaces, our approach achieves state-of-the-art reconstruction, faster convergence, and substantial gains in both T2I and editing tasks, demonstrating that representation encoders can be effectively adapted into robust generative components. An illustrative code example is provided in the supplementary material.
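Since the semantic–pixel reconstruction objective is only described at a high level here, the sketch below illustrates one way such a combined objective could be structured. It is not the authors' implementation: the frozen-encoder setup, the 1×1 projections, the toy pixel decoder, and the loss weighting are all illustrative assumptions; only the 96-channel, 16×-downsampled latent shape is taken from the abstract.

```python
# Minimal sketch (not the paper's implementation) of a combined
# semantic-pixel reconstruction objective: a frozen representation
# encoder provides high-dimensional features, a compact 96-channel
# latent is trained to reconstruct both those features (semantic
# branch) and the input pixels (pixel branch).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SemanticPixelAutoencoder(nn.Module):
    def __init__(self, encoder: nn.Module, feat_dim: int = 1024, latent_dim: int = 96):
        super().__init__()
        # Frozen, understanding-oriented representation encoder (assumed to
        # output a (B, feat_dim, H/16, W/16) feature map).
        self.encoder = encoder.eval()
        for p in self.encoder.parameters():
            p.requires_grad_(False)
        # Compress encoder features into a compact 96-channel latent.
        self.to_latent = nn.Conv2d(feat_dim, latent_dim, kernel_size=1)
        # Semantic branch: recover encoder features from the latent.
        self.to_feat = nn.Conv2d(latent_dim, feat_dim, kernel_size=1)
        # Pixel branch: toy decoder mapping the latent back to RGB at 16x upsampling.
        self.decoder = nn.Sequential(
            nn.Conv2d(latent_dim, 256, kernel_size=3, padding=1),
            nn.SiLU(),
            nn.Upsample(scale_factor=16, mode="nearest"),
            nn.Conv2d(256, 3, kernel_size=3, padding=1),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            feat = self.encoder(image)            # (B, feat_dim, H/16, W/16)
        z = self.to_latent(feat)                  # compact generative latent
        recon = self.decoder(z)                   # pixel-level reconstruction
        feat_hat = self.to_feat(z)                # semantic reconstruction
        loss_pix = F.mse_loss(recon, image)
        loss_sem = 1.0 - F.cosine_similarity(feat_hat, feat, dim=1).mean()
        return loss_pix + 0.5 * loss_sem          # illustrative weighting only
```

In this sketch the pixel term regularizes the latent toward accurate fine-grained geometry and texture, while the semantic term keeps it aligned with the encoder's feature space; the paper's actual decoder, loss terms, and weights may differ.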