Normalized Energy Models for Linear Inverse Problems
Nicolas M Zilberstein ⋅ Florentin Guth ⋅ Santiago Segarra ⋅ Eero Simoncelli
Abstract
Generative diffusion models can provide powerful priors for inverse problems in imaging, but existing implementations suffer from two key limitations: $(i)$ they learn only an implicit approximation of the prior density, and $(ii)$ they rely on crude likelihood approximations that introduce biases into the sampling. We address these challenges by introducing a new energy-based model trained using denoising score matching with a covariance-based regularization that enforces consistency across different inverse problems. Our approach learns explicit, normalized posterior densities for diverse linear inverse problems using a single model, while preserving the sampling capabilities of diffusion models. This enables new capabilities unavailable to score-based diffusion models: energy-guided adaptive sampling that adjusts schedules on the fly, unbiased MCMC correction with Metropolis-Hastings acceptance, and blind degradation estimation via Bayes' rule. We validate our method on multiple datasets (MNIST, CelebA, ImageNet) and tasks (inpainting, deblurring), demonstrating performance competitive with or superior to established baselines.
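As a rough illustration of the quantities involved (the notation below, a forward operator $A$, Gaussian noise level $\sigma$, learned energy $E_\theta$, and proposal $q$, is our own assumption and not taken verbatim from the paper), an explicit energy makes the posterior for a linear measurement $y = Ax + n$, $n \sim \mathcal{N}(0, \sigma^2 I)$, available in closed form up to a constant, which in turn makes a Metropolis-Hastings acceptance ratio computable:
$$
-\log p(x \mid y) \;=\; \frac{\|y - Ax\|^2}{2\sigma^2} \;+\; E_\theta(x) \;+\; \mathrm{const},
\qquad
\alpha(x \to x') \;=\; \min\!\left(1,\; \frac{e^{-E_\theta(x')}\, e^{-\|y - Ax'\|^2/2\sigma^2}\; q(x \mid x')}{e^{-E_\theta(x)}\, e^{-\|y - Ax\|^2/2\sigma^2}\; q(x' \mid x)}\right).
$$
Evaluating $\alpha$ requires the energy values themselves, not just their gradients, which is why score-only diffusion models cannot perform this kind of unbiased MCMC correction.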