Beyond VLM-Based Rewards: Diffusion-Native Latent Reward Modeling
Abstract
Preference optimization for diffusion models relies on reward functions that are both discriminative and computationally efficient. Vision-Language Models (VLMs) have emerged as powerful reward providers, but their computation and memory costs are substantial, and optimizing a latent diffusion generator through a pixel-space reward introduces a domain mismatch that complicates alignment. In this paper, we propose \textbf{DiNa-LRM}, a \textbf{di}ffusion-\textbf{na}tive \textbf{l}atent \textbf{r}eward \textbf{m}odel that formulates preference learning directly on noisy diffusion states. Our method introduces a noise-calibrated Thurstone likelihood whose uncertainty depends on the diffusion noise level. DiNa-LRM builds on a pretrained latent diffusion backbone with a timestep-conditioned reward head, and supports inference-time noise ensembling, providing a diffusion-native mechanism for test-time scaling and robust reward estimation. Across image alignment benchmarks, DiNa-LRM substantially outperforms existing diffusion-based reward baselines and is competitive with state-of-the-art VLMs at a markedly lower computational cost. When used for preference optimization, DiNa-LRM improves training dynamics, enabling faster and more resource-efficient model alignment.
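As a minimal sketch of how such a noise-calibrated Thurstone likelihood could look (the symbols $r_\theta$, $\sigma(t)$, and $\mathbf{x}_t$ are illustrative, not the paper's exact formulation): given scalar rewards computed on two noisy latents at timestep $t$, the probability of preferring sample $A$ over sample $B$ is modeled as
\[
P\bigl(A \succ B \mid \mathbf{x}_t^A, \mathbf{x}_t^B, t\bigr) = \Phi\!\left(\frac{r_\theta(\mathbf{x}_t^A, t) - r_\theta(\mathbf{x}_t^B, t)}{\sqrt{2}\,\sigma(t)}\right),
\]
where $\Phi$ is the standard normal CDF and $\sigma(t)$ is an uncertainty scale that grows with the noise level, down-weighting comparisons made at heavily noised timesteps.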