Uncertainty Estimation Using a Single Deep Deterministic Neural Network

Joost van Amersfoort · Lewis Smith · Yee Whye Teh · Yarin Gal

Keywords: [ Bayesian Deep Learning ] [ Representation Learning ] [ Supervised Learning ] [ Algorithms ]



We propose a method for training a deterministic deep model that can find and reject out-of-distribution data points at test time with a single forward pass. Our approach, Deterministic Uncertainty Quantification (DUQ), builds upon ideas from RBF networks. We scale training of these networks with a novel loss function and centroid updating scheme, matching the accuracy of softmax models. By enforcing detectability of changes in the input using a gradient penalty, we are able to reliably detect out-of-distribution data. Our uncertainty quantification scales well to large datasets, and using a single model, we improve upon or match Deep Ensembles in out-of-distribution detection on notably difficult dataset pairs such as FashionMNIST vs. MNIST and CIFAR-10 vs. SVHN.
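The core idea can be sketched as follows: an RBF network scores an input by the kernel distance between its feature vector and a set of per-class centroids, and the distance to the closest centroid serves as the uncertainty. This is a minimal NumPy sketch under simplified assumptions (a single fixed length scale `sigma`, raw feature vectors instead of learned per-class transformations, and hypothetical shapes); the actual DUQ model learns the feature extractor, centroids, and length scale jointly.

```python
import numpy as np

def duq_kernel(features, centroids, sigma=0.1):
    """RBF kernel score per class: exp(-||f(x) - e_c||^2 / (2 sigma^2)).

    features:  (n_features,) feature vector for one input
    centroids: (n_classes, n_features) one centroid per class
    Returns an array of (n_classes,) scores in (0, 1]; the maximum is
    the model's confidence, and 1 - max is the uncertainty.
    """
    sq_dist = np.sum((centroids - features) ** 2, axis=1)
    return np.exp(-sq_dist / (2 * sigma ** 2))

rng = np.random.default_rng(0)
centroids = rng.normal(size=(10, 32))                  # 10 classes, 32-dim features
in_dist = centroids[3] + 0.001 * rng.normal(size=32)   # lies near a class centroid
ood = 5.0 * rng.normal(size=32)                        # lies far from all centroids

print(duq_kernel(in_dist, centroids).max())  # near 1: confident, in-distribution
print(duq_kernel(ood, centroids).max())      # near 0: rejected as out-of-distribution
```

An input far from every centroid receives a score near zero for all classes, which is how a single deterministic forward pass can flag out-of-distribution points without sampling or ensembling.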
