Poster
Disentangling Trainability and Generalization in Deep Neural Networks
Lechao Xiao · Jeffrey Pennington · Samuel Schoenholz
Virtual
Keywords: [ Bayesian Deep Learning ] [ Deep Learning Theory ] [ Gaussian Processes ] [ Kernel Methods ] [ Deep Learning - Theory ]
A longstanding goal in the theory of deep learning is to characterize the conditions under which a given neural network architecture will be trainable, and if so, how well it might generalize to unseen data. In this work, we provide such a characterization in the limit of very wide and very deep networks, for which the analysis simplifies considerably. For wide networks, the trajectory under gradient descent is governed by the Neural Tangent Kernel (NTK), and for deep networks the NTK itself maintains only weak data dependence. By analyzing the spectrum of the NTK, we formulate necessary conditions for trainability and generalization across a range of architectures, including Fully Connected Networks (FCNs) and Convolutional Neural Networks (CNNs). We identify large regions of hyperparameter space for which networks can memorize the training set but completely fail to generalize. We find that CNNs without global average pooling behave almost identically to FCNs, but that CNNs with pooling have markedly different and often better generalization performance. These theoretical results are corroborated experimentally on CIFAR10 for a variety of network architectures.
We include a colab notebook that reproduces the essential results of the paper: https://colab.research.google.com/github/google/neural-tangents/blob/master/notebooks/disentanglingtrainabilityand_generalization.ipynb
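For orientation, the following is a minimal sketch (not the authors' exact code) of the kind of computation the notebook performs: build a wide, deep fully connected network with the neural_tangents library, evaluate its infinite-width NTK on a small batch of inputs, and inspect the kernel's spectrum. The depth, width, and weight/bias standard deviations below are illustrative assumptions, not the paper's settings.

```python
import jax.numpy as np
from jax import random
from neural_tangents import stax

# Illustrative deep FCN: depth, width, W_std, and b_std are assumed values.
depth = 10
layers = []
for _ in range(depth):
    layers += [stax.Dense(512, W_std=1.5, b_std=0.05), stax.Erf()]
layers += [stax.Dense(1, W_std=1.5, b_std=0.05)]
init_fn, apply_fn, kernel_fn = stax.serial(*layers)

# Small batch of synthetic inputs standing in for flattened CIFAR10 images.
key = random.PRNGKey(0)
x = random.normal(key, (64, 3072))

# Analytic infinite-width NTK; the spread of its eigenvalues (condition
# number) governs how quickly gradient descent can fit the training data.
ntk = kernel_fn(x, x, 'ntk')
eigvals = np.linalg.eigvalsh(ntk)  # ascending order
print('largest / smallest NTK eigenvalue:', eigvals[-1] / eigvals[0])
```

Swapping stax.serial over Dense layers for a convolutional stack (with or without global average pooling) gives the analogous CNN kernels discussed in the abstract; the notebook linked above contains the full experiments.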