Poster in Workshop: Structured Probabilistic Inference and Generative Modeling
Lifted Residual Score Estimation
Tejas Jayashankar · Jongha (Jon) Ryu · Xiangxiang Xu · Gregory Wornell
Keywords: [ VAEs ] [ Diffusion Models ] [ WAEs ] [ implicit encoders ] [ score matching ] [ residual estimation ]
Abstract:
This paper proposes two new techniques to improve the accuracy of score estimation. The first is a new objective function called the *lifted score estimation objective*, which serves as a replacement for the score matching (SM) objective. Instead of minimizing the expected $\ell_2^2$-distance between the learned and true score models, the proposed objective operates in the *lifted space* of the outer product of a vector-valued function with itself. The distance is defined as the expected squared Frobenius norm of the difference between such matrix-valued objects induced by the learned and true score functions. The second idea is to model and learn the *residual approximation error* of the learned score estimator, given a base score model architecture. We empirically demonstrate that the combination of the two ideas, called *lifted residual score estimation*, outperforms sliced SM in training VAEs and WAEs with implicit encoders, and denoising SM in training diffusion models, as evaluated by downstream metrics of sample quality such as the FID score.
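The lifted objective described above can be sketched numerically: given a batch of learned and true score vectors, it compares the outer products $s s^\top$ under the squared Frobenius norm rather than comparing the vectors directly. The following is a minimal NumPy illustration under that reading of the abstract; the function names and the batched-vector interface are assumptions, not the paper's implementation.

```python
import numpy as np


def sm_loss(s_model: np.ndarray, s_true: np.ndarray) -> float:
    """Standard SM-style objective: expected squared l2-distance
    between learned and true score vectors, shape (batch, dim)."""
    return float(np.mean(np.sum((s_model - s_true) ** 2, axis=-1)))


def lifted_loss(s_model: np.ndarray, s_true: np.ndarray) -> float:
    """Lifted objective (sketch): expected squared Frobenius norm of
    the difference between the outer products s s^T induced by the
    learned and true score functions."""
    outer_model = s_model[:, :, None] * s_model[:, None, :]  # (batch, dim, dim)
    outer_true = s_true[:, :, None] * s_true[:, None, :]
    diff = outer_model - outer_true
    return float(np.mean(np.sum(diff ** 2, axis=(1, 2))))
```

Both losses vanish when the learned score matches the true score exactly; they differ in how estimation error is penalized, since the lifted loss measures discrepancy between rank-one matrices rather than between the vectors themselves.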