

Tutorial

Bayesian Deep Learning and a Probabilistic Perspective of Model Construction

Andrew Wilson

Virtual

Abstract:

Bayesian inference is especially compelling for deep neural networks. The key distinguishing property of a Bayesian approach is marginalization instead of optimization. Neural networks are typically underspecified by the data: many different parameter settings correspond to different but high-performing models, which is exactly the setting where marginalization makes the biggest difference for accuracy and calibration.
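
To make the contrast with optimization concrete, here is a minimal sketch of the Bayesian model average in standard notation (the symbols w for network weights, D for the training data, and J for the number of posterior samples are assumptions for this illustration, not notation from the tutorial itself):

```latex
% Marginalization: the posterior predictive averages over all weight settings,
% approximated in practice with samples from the posterior.
p(y \mid x, \mathcal{D})
  = \int p(y \mid x, w)\, p(w \mid \mathcal{D})\, \mathrm{d}w
  \;\approx\; \frac{1}{J} \sum_{j=1}^{J} p(y \mid x, w_j),
  \qquad w_j \sim p(w \mid \mathcal{D}).

% Optimization, by contrast, predicts with a single weight setting:
p(y \mid x, \mathcal{D}) \approx p(y \mid x, \hat{w}),
  \qquad \hat{w} = \operatorname*{arg\,max}_{w} \, p(w \mid \mathcal{D}).
```

When many distinct settings w fit the data well but make different predictions, the averaged predictive can differ substantially from the prediction at any single optimum, which is the underspecification argument above.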

The tutorial has four parts:

Part 1: Introduction to Bayesian modelling and overview (foundations, Bayesian model averaging in deep learning, epistemic uncertainty, examples)

Part 2: The function-space view (Gaussian processes, infinite neural networks, training a neural network is kernel learning, Bayesian non-parametric deep learning)

Part 3: Practical methods for Bayesian deep learning (Loss landscapes, functional diversity in mode connectivity, SWAG, epistemic uncertainty, calibration, subspace inference, K-FAC Laplace, MC Dropout, stochastic MCMC, Bayes by Backprop, deep ensembles)

Part 4: Bayesian model construction and generalization (Deep ensembles, MultiSWAG, tempering, prior-specification, posterior contraction, re-thinking generalization, double descent, width-depth trade-offs, more!)
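
As a concrete illustration of the model-averaging theme running through Parts 3 and 4 (deep ensembles, SWAG, MultiSWAG), the sketch below averages the predictive distributions of several independently trained networks in the spirit of a deep ensemble. The architecture, synthetic data, and ensemble size are placeholder assumptions chosen for brevity, not part of the tutorial materials.

```python
# Minimal deep-ensemble sketch: approximate the Bayesian model average by
# averaging the predictive distributions of independently trained networks.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Synthetic two-class data (placeholder for a real dataset).
X = torch.randn(512, 2)
y = (X[:, 0] * X[:, 1] > 0).long()

def make_net():
    return nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))

def train(net, epochs=200):
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(net(X), y)
        loss.backward()
        opt.step()
    return net

# Treat each ensemble member as one (rough) sample from the posterior over
# functions; averaging their softmax outputs approximates the marginal
# predictive p(y | x, D) rather than predicting from a single optimum.
ensemble = [train(make_net()) for _ in range(5)]

x_test = torch.randn(8, 2)
with torch.no_grad():
    probs = torch.stack([F.softmax(net(x_test), dim=-1) for net in ensemble])
    bma_predictive = probs.mean(dim=0)  # averaged predictive distribution

print(bma_predictive)
```

Methods such as MultiSWAG extend this idea by additionally sampling weights around each ensemble member, but the averaging step over predictive distributions is the same.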
