Poster
Understanding Priors in Bayesian Neural Networks at the Unit Level
Mariia Vladimirova · Jakob Verbeek · Pablo Mesejo · Julyan Arbel

Thu Jun 13th 06:30 -- 09:00 PM @ Pacific Ballroom #90

We investigate deep Bayesian neural networks with Gaussian priors on the weights and a class of ReLU-like nonlinearities. Bayesian neural networks with Gaussian priors are well known to induce an L2 ("weight decay") regularization. Our results indicate a more intricate regularization effect at the level of the unit activations. Our main result establishes that the induced prior distribution on the units, both before and after activation, becomes increasingly heavy-tailed with the depth of the layer. We show that first-layer units are Gaussian, second-layer units are sub-exponential, and units in deeper layers are characterized by sub-Weibull distributions. Our results provide new theoretical insight on deep Bayesian neural networks, which we corroborate with simulation experiments.
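The increasingly heavy-tailed behavior can be illustrated with a small Monte Carlo sketch: draw i.i.d. standard-normal weights, forward a fixed input through ReLU layers, and look at the marginal distribution of a single pre-activation unit at each depth. The network width, sample size, input choice, and the excess-kurtosis diagnostic below are illustrative assumptions, not the paper's own experimental setup.

```python
import numpy as np

def unit_samples(depth, width=10, n_draws=50000, seed=0):
    """Monte Carlo draws of one pre-activation unit at the given layer depth,
    under i.i.d. N(0,1) priors on all weights and ReLU nonlinearities.
    The input is an arbitrary fixed unit vector (an illustrative choice)."""
    rng = np.random.default_rng(seed)
    h = np.full((n_draws, width), 1.0 / np.sqrt(width))  # fixed input, replicated per draw
    for _ in range(depth - 1):
        W = rng.standard_normal((n_draws, width, width))  # fresh weights per draw
        h = np.maximum(np.einsum("nij,nj->ni", W, h), 0.0) / np.sqrt(width)
    w = rng.standard_normal((n_draws, width))
    return np.einsum("nj,nj->n", w, h)  # one unit's pre-activation per draw

def excess_kurtosis(s):
    """Excess kurtosis: 0 for Gaussian, positive for heavier-than-Gaussian tails."""
    s = s - s.mean()
    return (s**4).mean() / (s**2).mean() ** 2 - 3.0

for depth in (1, 2, 3):
    print(f"layer {depth}: excess kurtosis = {excess_kurtosis(unit_samples(depth)):.2f}")
```

A layer-1 unit is an exact Gaussian (a fixed linear combination of Gaussian weights), so its excess kurtosis should be near zero, while deeper units should show clearly positive and growing kurtosis, consistent with the sub-exponential and sub-Weibull characterizations.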

Author Information

Mariia Vladimirova (Inria)
Jakob Verbeek (Inria)
Pablo Mesejo (Universidad de Granada)
Julyan Arbel (Inria Grenoble Rhone-Alpes)

