

Poster

On the statistical rate of nonlinear recovery in generative models with heavy-tailed data

Xiaohan Wei · Zhuoran Yang · Zhaoran Wang

Pacific Ballroom #198

Keywords: [ Computational Learning Theory ] [ Sparsity and Compressed Sensing ] [ Statistical Learning Theory ]


Abstract: We consider estimating a high-dimensional vector from non-linear measurements, where the unknown vector is represented by a generative model $G:\mathbb{R}^k\rightarrow\mathbb{R}^d$ with $k\ll d$. Such a model imposes structural priors on the unknown vector without requiring a dedicated basis, and in particular enables new and efficient approaches to recovery problems with far fewer measurements than the ambient dimension of the vector. While progress has recently been made toward a theoretical understanding of linear Gaussian measurements, much less is known when the model is possibly misspecified and the measurements are non-Gaussian. In this paper, we take a step in this direction by considering the scenario where the measurements are non-Gaussian, subject to possibly unknown nonlinear transformations, and the responses are heavy-tailed. We then propose new estimators via score functions based on the first- and second-order Stein's identities, and prove a sample-size bound of $m=\mathcal{O}(k\varepsilon^{-2}\log(L/\varepsilon))$ for achieving an $\varepsilon$ error, in the form of exponential concentration inequalities. Furthermore, for the special case of the multi-layer ReLU generative model, we improve the sample bound by a logarithmic factor to $m=\mathcal{O}(k\varepsilon^{-2}\log(d))$, matching the state-of-the-art statistical rate in compressed sensing for estimating $k$-sparse vectors. On the technical side, we develop new chaining methods for bounding heavy-tailed processes, which may be of independent interest.
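For context, here is a minimal sketch of the standard first- and second-order Stein's identities that such score-function estimators build on (these are the textbook forms, not restated from the paper; the density $p$, link $f$, and score notation $S_1$, $S_2$ are our own). For a random vector $x\in\mathbb{R}^d$ with sufficiently smooth density $p$ and a suitably integrable function $f$,

$$
S_1(x) = -\nabla_x \log p(x), \qquad \mathbb{E}\left[f(x)\,S_1(x)\right] = \mathbb{E}\left[\nabla_x f(x)\right],
$$

$$
S_2(x) = \frac{\nabla_x^2 p(x)}{p(x)}, \qquad \mathbb{E}\left[f(x)\,S_2(x)\right] = \mathbb{E}\left[\nabla_x^2 f(x)\right],
$$

both obtained by integration by parts. For instance, under a single-index response $y = f(\langle x, G(z^*)\rangle)$, the first identity gives $\mathbb{E}[y\,S_1(x)] = \mathbb{E}[f'(\langle x, G(z^*)\rangle)]\,G(z^*)$, so the empirical average $\frac{1}{m}\sum_{i=1}^m y_i\,S_1(x_i)$ is, in expectation, proportional to $G(z^*)$; a natural construction (our assumption, not spelled out in the abstract) is then to project such a moment estimate onto the range of $G$.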
