Uncertainty Modeling in Generative Compressed Sensing

Yilang Zhang · Mengchu Xu · Xiaojun Mao · Jian Wang


Keywords: [ PM: Bayesian Models and Methods ] [ T: Deep Learning ] [ DL: Generative Models and Autoencoders ]

Abstract
Tue 19 Jul 3:30 p.m. PDT — 5:30 p.m. PDT
Spotlight presentation: Deep Learning
Tue 19 Jul 1:15 p.m. PDT — 2:45 p.m. PDT


Compressed sensing (CS) aims to recover a high-dimensional signal with structural priors from its low-dimensional linear measurements. Inspired by the remarkable success of deep neural networks in modeling the priors of natural signals, generative neural networks have recently been used to replace hand-crafted structural priors in CS. However, the reconstruction capability of the generative model is fundamentally limited by the range of its generator, which is typically a small subset of the signal space of interest. To break this bottleneck and thus reconstruct out-of-range signals, this paper presents a novel method called CS-BGM that effectively expands the range of the generator. Specifically, CS-BGM introduces uncertainty into the latent variable and the parameters of the generator, and adopts variational inference (VI) and maximum a posteriori (MAP) estimation to infer them. Theoretical analysis demonstrates that expanding the range of the generator is necessary for reducing the reconstruction error in generative CS. Extensive experiments show a consistent improvement of CS-BGM over the baselines.
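The core idea described above, jointly inferring the latent code and a perturbation of the generator's parameters under Gaussian (MAP-style) priors so that out-of-range signals become reachable, can be sketched on a toy problem. Everything below (the linear generator, the prior weights, the step sizes) is an illustrative assumption for exposition, not the paper's actual model or algorithm:

```python
import numpy as np

# Toy sketch: besides optimizing the latent code z (standard generative CS),
# also infer a MAP perturbation dG of the generator's parameters, which lets
# the model reach signals outside the generator's range. The linear generator,
# priors, and step sizes are illustrative assumptions only.
rng = np.random.default_rng(0)
n, m, k = 50, 25, 5                              # signal dim, measurements, latent dim

A = rng.normal(size=(m, n)) / np.sqrt(m)         # Gaussian measurement matrix
G = rng.normal(size=(n, k)) / np.sqrt(k)         # toy *linear* generator: G(z) = G @ z

z_true = rng.normal(size=k)
x_true = G @ z_true + 0.1 * rng.normal(size=n)   # signal slightly OUT of range(G)
y = A @ x_true                                   # low-dimensional linear measurements

# Joint gradient descent on z and dG; the quadratic penalties act as
# Gaussian priors, so this is a MAP estimate of both quantities.
z, dG = np.zeros(k), np.zeros_like(G)
lam_z, lam_G, lr = 0.1, 1.0, 0.01
for _ in range(3000):
    x_hat = (G + dG) @ z
    r = A @ x_hat - y                            # measurement residual
    g_x = A.T @ r                                # gradient w.r.t. x_hat
    g_z = (G + dG).T @ g_x + lam_z * z           # gradient w.r.t. latent code
    g_G = np.outer(g_x, z) + lam_G * dG          # gradient w.r.t. perturbation
    z -= lr * g_z
    dG -= lr * g_G

x_hat = (G + dG) @ z
print("relative measurement residual:",
      np.linalg.norm(A @ x_hat - y) / np.linalg.norm(y))
```

Without the `dG` term this reduces to the classical range-restricted generative-CS baseline; the perturbation is what gives the reconstruction access to signals outside the generator's range, at the cost of an extra prior (`lam_G`) to keep the expanded range close to the original one.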
