Poster in Workshop: Structured Probabilistic Inference and Generative Modeling

Improving Training of Likelihood-based Generative Models with Gaussian Homotopy

Ba-Hien Tran · Giulio Franzese · Pietro Michiardi · Maurizio Filippone

Keywords: [ Normalizing Flows ] [ Variational Autoencoders ] [ Generative Models ]


Abstract:

Generative Models (GMs) have become popular owing to their success in various domains. In computer vision, for instance, they are able to generate astonishingly realistic-looking images. Likelihood-based GMs are fast at generating new samples, since they need only a single model evaluation per sample, but their sample quality is usually lower than that of score-based Diffusion Models (DMs). In this work, we verify that the success of score-based DMs is due in part to their smoothing of the data, by incorporating this smoothing into the training of likelihood-based GMs. In the optimization literature, this process of data smoothing is known as Gaussian homotopy (GH), and it has strong theoretical grounding. Crucially, GH incurs no computational overhead and can be implemented by adding a single line of code to the optimization loop. Results on image datasets, for both Variational Autoencoders and Normalizing Flows, demonstrate significant improvements in the generation quality of likelihood-based GMs.
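The abstract states that GH amounts to one extra line in the optimization loop. Below is a minimal sketch of what that could look like in PyTorch; the `log_prob` model interface, the linear noise schedule, and all function and hyperparameter names are illustrative assumptions, not the authors' implementation.

```python
import torch

def train_with_gaussian_homotopy(model, loader, optimizer, num_epochs=100,
                                 sigma_max=1.0, sigma_min=0.0):
    """Hypothetical training loop for a likelihood-based GM with Gaussian
    homotopy: perturb the data with Gaussian noise whose scale is annealed
    from sigma_max down to sigma_min over the course of training."""
    for epoch in range(num_epochs):
        # Linearly anneal the smoothing level (an assumed schedule;
        # other annealing strategies are possible).
        sigma = sigma_max + (sigma_min - sigma_max) * epoch / max(num_epochs - 1, 1)
        for x, _ in loader:
            # Gaussian homotopy: the single extra line in the loop.
            x = x + sigma * torch.randn_like(x)
            loss = -model.log_prob(x).mean()  # negative log-likelihood
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```

By the end of training the noise scale reaches sigma_min, so the model is ultimately fit to the original (unsmoothed) data distribution while earlier epochs optimize a smoothed, easier objective.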
