Oral

Accurate Uncertainties for Deep Learning Using Calibrated Regression

Volodymyr Kuleshov · Nathan Fenner · Stefano Ermon

Abstract:

Accounting for uncertainty in modern deep learning algorithms is crucial for building reliable, interpretable, and interactive systems. Existing approaches typically center on Bayesian methods, which may not always accurately capture real-world uncertainty; e.g., a 95% confidence interval may not contain the true outcome 95% of the time. Here, we propose a simple procedure that is guaranteed to calibrate probabilistic forecasts obtained from Bayesian deep learning models as well as general regression algorithms. Our procedure is inspired by Platt scaling for support vector machines and extends existing recalibration methods for classification to regression tasks. We evaluate our method on Bayesian linear regression as well as feedforward and recurrent Bayesian neural networks trained with approximate variational inference. We find that our method produces calibrated uncertainty estimates and improves performance on tasks in time series forecasting and reinforcement learning.
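To make the recalibration idea concrete, below is a minimal sketch of how such a procedure might look. The abstract does not spell out implementation details, so the sketch assumes the forecaster outputs Gaussian predictive distributions (mean and standard deviation) and uses isotonic regression as the monotone recalibration map on a held-out set; the function and variable names are illustrative, not from the paper.

```python
import numpy as np
from scipy.stats import norm
from sklearn.isotonic import IsotonicRegression

def fit_recalibrator(mu, sigma, y):
    """Fit a monotone map R so that R(F(y)) is empirically calibrated.

    mu, sigma : predicted Gaussian mean and std on a held-out
                calibration set (illustrative assumption).
    y         : observed outcomes on the same set.
    """
    # Predicted CDF evaluated at each observed outcome.
    p_pred = norm.cdf(y, loc=mu, scale=sigma)
    # Empirical frequency: fraction of predicted levels at or below each level.
    p_emp = np.array([np.mean(p_pred <= p) for p in p_pred])
    # Isotonic regression gives a monotone recalibration map on [0, 1].
    recal = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
    recal.fit(p_pred, p_emp)
    return recal

# Usage sketch: translate a nominal quantile level into a calibrated one.
rng = np.random.default_rng(0)
mu = rng.normal(size=500)
sigma = np.full(500, 1.5)            # over-dispersed model
y = mu + rng.normal(size=500)        # true noise has std 1
recal = fit_recalibrator(mu, sigma, y)
calibrated_level = recal.predict([0.95])  # adjusted level for a 95% bound
```

Intervals are then read off by composing the recalibration map with the model's predictive CDF, so that a stated 95% interval covers the outcome roughly 95% of the time on held-out data.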