Contributed talk in Workshop: Uncertainty and Robustness in Deep Learning

'In-Between' Uncertainty in Bayesian Neural Networks

Andrew Y. K. Foong


Abstract:

We describe a limitation in the expressiveness of the predictive uncertainty estimate given by mean-field variational inference (MFVI), a popular approximate inference method for Bayesian neural networks. In particular, MFVI fails to give calibrated uncertainty estimates in between separated regions of observations. This can lead to catastrophically overconfident predictions when testing on out-of-distribution data. Avoiding such overconfidence is critical for active learning, Bayesian optimisation and out-of-distribution robustness. We instead find that a classical technique, the linearised Laplace approximation, can handle 'in-between' uncertainty much better for small network architectures.
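
As a rough illustration of the setting the abstract describes, here is a minimal PyTorch sketch (not the authors' code): a one-dimensional "gap" dataset with two separated clusters of observations, a small MAP-trained network, and a linearised Laplace predictive variance computed from network Jacobians. The architecture, hyperparameters, noise and prior variances, and the Gauss-Newton-style Hessian approximation are all assumptions made for illustration.

```python
import torch

torch.manual_seed(0)

# Toy 'gap' dataset: two separated clusters of inputs with nothing in
# between, so the region around x = 0 probes 'in-between' uncertainty.
x_left = torch.rand(50, 1) - 2.0      # inputs in [-2, -1]
x_right = torch.rand(50, 1) + 1.0     # inputs in [ 1,  2]
x = torch.cat([x_left, x_right])
y = torch.sin(3.0 * x) + 0.1 * torch.randn_like(x)

# Small MLP trained to a MAP estimate (weight decay acts as a Gaussian prior).
net = torch.nn.Sequential(
    torch.nn.Linear(1, 50), torch.nn.Tanh(), torch.nn.Linear(50, 1)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-2, weight_decay=1e-3)
for _ in range(2000):
    opt.zero_grad()
    ((net(x) - y) ** 2).mean().backward()
    opt.step()

params = list(net.parameters())
n_params = sum(p.numel() for p in params)

def jacobian(x_single):
    """Jacobian of the scalar network output w.r.t. all parameters."""
    out = net(x_single.unsqueeze(0)).squeeze()
    grads = torch.autograd.grad(out, params)
    return torch.cat([g.reshape(-1) for g in grads])

# Linearised Laplace predictive variance:
#   Var[f(x*)] ~ J(x*) H^{-1} J(x*)^T,
# where H is a Gauss-Newton approximation to the Hessian of the negative
# log joint at the MAP. noise_var and prior_var are assumed values.
noise_var, prior_var = 0.1 ** 2, 1.0
J_train = torch.stack([jacobian(xi) for xi in x])            # (N, P)
H = J_train.T @ J_train / noise_var + torch.eye(n_params) / prior_var
H_inv = torch.linalg.inv(H)

x_test = torch.linspace(-3.0, 3.0, 121).unsqueeze(1)
var = torch.stack([jacobian(xt) @ H_inv @ jacobian(xt) for xt in x_test])

# The Laplace variance should rise in the gap around x = 0, whereas a
# mean-field Gaussian fit typically stays (over)confident there.
print(var[60].item())   # predictive variance at x = 0
```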
