Poster

Non-Vacuous Generalisation Bounds for Shallow Neural Networks

Felix Biggs · Benjamin Guedj

Hall E #1432

Keywords: [ DL: Theory ] [ T: Deep Learning ]

Wed 20 Jul 3:30 p.m. PDT — 5:30 p.m. PDT
 
Spotlight presentation: Theory
Wed 20 Jul 1:30 p.m. PDT — 3 p.m. PDT

Abstract: We focus on a specific class of shallow neural networks with a single hidden layer, namely those with $L_2$-normalised data and either a sigmoid-shaped Gaussian error function ("erf") activation or a Gaussian Error Linear Unit (GELU) activation. For these networks, we derive new generalisation bounds through the PAC-Bayesian theory; unlike most existing such bounds, they apply to neural networks with deterministic rather than randomised parameters. Our bounds are empirically non-vacuous when the network is trained with vanilla stochastic gradient descent on MNIST and Fashion-MNIST.
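As an informal illustration of the network class the abstract describes (a minimal sketch under assumptions, not the authors' code), the snippet below implements a single-hidden-layer forward pass on $L_2$-normalised input with either an erf or a GELU activation; the function names, arguments, and numpy/scipy usage are illustrative choices.

import numpy as np
from scipy.special import erf
from scipy.stats import norm

def l2_normalise(x):
    # Project the input onto the unit L2 sphere, matching the normalised-data assumption.
    return x / np.linalg.norm(x)

def gelu(z):
    # Gaussian Error Linear Unit: z * Phi(z), where Phi is the standard normal CDF.
    return z * norm.cdf(z)

def shallow_net(x, W, v, activation="erf"):
    # Single hidden layer: elementwise activation of W x, then a linear output layer v.
    x = l2_normalise(x)
    pre = W @ x
    h = erf(pre) if activation == "erf" else gelu(pre)
    return v @ h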
