Random Matrix Theory Proves that Deep Learning Representations of GAN-data Behave as Gaussian Mixtures

Mohamed El Amine Seddik · Cosme Louart · Mohamed Tamaazousti · Romain Couillet


Keywords: [ Deep Learning Theory ] [ Matrix/Tensor Methods ] [ Deep Learning - Theory ]

Tue 14 Jul 2 p.m. PDT — 2:45 p.m. PDT
Wed 15 Jul 1 a.m. PDT — 1:45 a.m. PDT

Abstract: This paper shows that deep learning (DL) representations of data produced by generative adversarial nets (GANs) are random vectors which fall within the class of so-called \textit{concentrated} random vectors. Exploiting further the fact that Gram matrices of the type $G = X^\intercal X$, with $X=[x_1,\ldots,x_n]\in \mathbb{R}^{p\times n}$ and the $x_i$ independent concentrated random vectors drawn from a mixture model, behave asymptotically (as $n,p\to \infty$) as if the $x_i$ were drawn from a Gaussian mixture, we show that DL representations of GAN-data can be fully described by their first two statistical moments for a wide range of standard classifiers. Our theoretical findings are validated by generating images with the BigGAN model and extracting representations across several popular deep representation networks.
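The central phenomenon can be illustrated numerically. The sketch below (our own toy construction, not the paper's experimental setup) builds concentrated random vectors by passing Gaussians through a Lipschitz nonlinearity standing in for a deep representation network, builds a Gaussian surrogate with the same first two moments, and checks that the spectra of the two Gram matrices $G = X^\intercal X / p$ nearly coincide, as the theory predicts for large $n, p$:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 512, 1024  # feature dimension and sample count (illustrative sizes)

# A "concentrated" random vector: a Lipschitz map applied to a Gaussian.
# A fixed random projection followed by tanh is a toy stand-in for a
# deep representation network (an assumption for illustration only).
W = rng.standard_normal((p, p)) / np.sqrt(p)
Z = rng.standard_normal((p, n))
X = np.tanh(W @ Z)  # columns x_i are concentrated random vectors

# Gaussian surrogate matching the first two moments (mean and covariance).
mu = X.mean(axis=1, keepdims=True)
C = np.cov(X)
L = np.linalg.cholesky(C + 1e-8 * np.eye(p))  # jitter for numerical safety
Xg = mu + L @ rng.standard_normal((p, n))

# Compare the sorted spectra of the two Gram matrices G = X^T X / p.
eig = np.sort(np.linalg.eigvalsh(X.T @ X / p))
eig_g = np.sort(np.linalg.eigvalsh(Xg.T @ Xg / p))
rel_gap = np.abs(eig - eig_g).mean() / np.abs(eig).mean()
print(f"mean relative spectral gap: {rel_gap:.4f}")  # typically small
```

The small gap reflects the universality result: for spectral purposes, the concentrated vectors are indistinguishable from a Gaussian mixture with matched moments as $n, p \to \infty$.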
