Bayesian Deep Learning and a Probabilistic Perspective of Model Construction

Andrew Wilson

Mon 13 Jul 8 a.m. — 11 a.m. PDT
Mon 13 Jul 6 p.m. — 9 p.m. PDT

[ Slides ] [ Video Part 1 ] [ Video Part 2 ] [ Video Part 3 ] [ Video Part 4 ]

The videos for each part of this tutorial are linked above. The SlidesLive embed below is the livestream of the entire day, including the Q&A.


Bayesian inference is especially compelling for deep neural networks. The key distinguishing property of a Bayesian approach is marginalization instead of optimization. Neural networks are typically underspecified by the data: many different settings of the parameters correspond to different but high-performing models, and this is exactly the regime in which marginalization makes the biggest difference for accuracy and calibration.
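The contrast between optimization and marginalization can be sketched in a few lines. This is a minimal, hypothetical illustration (a random-feature "model" standing in for a neural network, with random draws standing in for posterior samples), not the tutorial's method: instead of predicting with one weight setting, we average predictions over many plausible settings, and the spread across them is a rough measure of epistemic uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression inputs (hypothetical, for illustration only).
x = np.linspace(-1, 1, 20)

def predict(w, x):
    """A tiny stand-in 'model': fixed tanh features with weights w."""
    features = np.tanh(np.outer(x, np.arange(1, 6)))  # shape (20, 5)
    return features @ w

# Optimization would pick a single w-hat and predict with it.
# Marginalization (Bayesian model averaging) instead approximates
#   p(y* | x*, D) ≈ (1/S) * sum_s p(y* | x*, w_s),  w_s ~ p(w | D)
# Here, random draws stand in for posterior samples w_s.
samples = [rng.standard_normal(5) for _ in range(100)]
preds = np.stack([predict(w, x) for w in samples])  # shape (100, 20)

bma_mean = preds.mean(axis=0)      # model-averaged prediction
epistemic_std = preds.std(axis=0)  # disagreement across models = epistemic uncertainty
```

In a real Bayesian neural network the samples would come from an approximate posterior (e.g. SGD trajectories or variational draws) rather than a fixed prior, but the averaging step is the same.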

The tutorial has four parts:

Part 1: Introduction to Bayesian modelling and overview (Foundations, overview, Bayesian model averaging in deep learning, epistemic uncertainty, examples)

Part 2: The function-space view (Gaussian processes, infinite neural networks, training a neural network is kernel learning, Bayesian non-parametric deep learning)

Part 3: Practical methods for Bayesian deep learning (Loss landscapes, functional diversity in mode connectivity, SWAG, epistemic uncertainty, calibration, subspace inference, K-FAC Laplace, MC Dropout, stochastic MCMC, Bayes by Backprop, deep ensembles)

Part 4: Bayesian model construction and generalization (Deep ensembles, MultiSWAG, tempering, prior-specification, posterior contraction, re-thinking generalization, double descent, width-depth trade-offs, more!)
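The deep ensembles of Parts 3 and 4 can be viewed as a simple form of Bayesian model averaging: each independently trained network is one term in the average. The sketch below is a hypothetical illustration with random logit functions standing in for trained ensemble members; the key point is that a deep ensemble averages predicted probabilities, not logits.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Stand-ins for K independently trained networks over 3 classes.
# (Hypothetical: a real deep ensemble retrains the same architecture
# from K different random initializations.)
K, n_classes = 5, 3
def member_logits(x):
    W = rng.standard_normal((x.shape[-1], n_classes))
    return x @ W

x = rng.standard_normal((4, 8))  # a batch of 4 inputs with 8 features

# Ensemble prediction: average each member's class probabilities,
# treating every member as one sample in a Bayesian model average.
probs = np.mean([softmax(member_logits(x)) for _ in range(K)], axis=0)
```

MultiSWAG extends this idea by additionally marginalizing within the posterior mode found by each ensemble member, rather than using a single point per member.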
