Rényi Diffusion Models
Abstract
The choice of training objective is central to diffusion-based generative modeling, affecting both sample quality and distribution coverage. While maximum likelihood training provides a principled objective with strong theoretical grounding, empirical studies indicate that likelihood-oriented objectives in diffusion models are often inversely correlated with perceptual quality metrics. We propose the Rényi diffusion model, a unified generative framework whose training objectives are formulated through Rényi divergence. This formulation yields a generalized score matching objective with explicit control over the trade-off between sample quality and distribution coverage. Experiments on multiple datasets demonstrate an improved balance between density estimation and sample generation performance without modifying model architectures or sampling procedures.
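For orientation, the divergence family the abstract refers to is the standard Rényi divergence of order $\alpha$ between densities $p$ and $q$, which recovers the KL divergence (the maximum likelihood case) as $\alpha \to 1$; how the paper's objective weights or anneals $\alpha$ is not specified here.
\[
D_\alpha(p \,\|\, q) \;=\; \frac{1}{\alpha - 1} \log \int p(x)^{\alpha}\, q(x)^{1-\alpha}\, dx,
\qquad
\lim_{\alpha \to 1} D_\alpha(p \,\|\, q) = \mathrm{KL}(p \,\|\, q).
\]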