Oral
Accurate Uncertainties for Deep Learning Using Calibrated Regression
Volodymyr Kuleshov · Nathan Fenner · Stefano Ermon

Thu Jul 12 07:20 AM -- 07:30 AM (PDT) @ A4

Accounting for uncertainty in modern deep learning algorithms is crucial for building reliable, interpretable, and interactive systems. Existing approaches typically center on Bayesian methods, which may not always accurately capture real-world uncertainty; e.g., a 95% confidence interval may not contain the true outcome 95% of the time. Here, we propose a simple procedure that is guaranteed to calibrate probabilistic forecasts obtained from Bayesian deep learning models as well as general regression algorithms. Our procedure is inspired by Platt scaling for support vector machines and extends existing recalibration methods for classification to regression tasks. We evaluate our method on Bayesian linear regression as well as feedforward and recurrent Bayesian neural networks trained with approximate variational inference. We find that our method produces calibrated uncertainty estimates and improves performance on tasks in time series forecasting and reinforcement learning.
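
The recalibration procedure the abstract describes admits a compact illustration. The following is a minimal sketch, not the paper's implementation: it assumes a forecaster that outputs Gaussian predictive CDFs and uses scikit-learn's IsotonicRegression as the auxiliary recalibrator (one natural choice for a Platt-scaling-style monotone map); the data, the Gaussian forecaster, and all variable names are hypothetical.

    import numpy as np
    from scipy.stats import norm
    from sklearn.isotonic import IsotonicRegression

    # Sketch of the idea: given a probabilistic regressor that outputs a
    # CDF F_t for each input, fit an auxiliary monotone map R on held-out
    # data so that the composed forecast R(F_t(.)) is calibrated, i.e.
    # events predicted with probability p occur with frequency p.

    rng = np.random.default_rng(0)

    # Hypothetical calibration set: predicted Gaussian mean/std per point,
    # plus the observed outcome. The model is deliberately overconfident.
    mu = rng.normal(size=500)
    sigma = np.full(500, 0.5)                   # predicted std too small
    y = mu + rng.normal(scale=1.0, size=500)    # true noise is wider

    # Step 1: evaluate each forecast's CDF at the outcome, p_t = F_t(y_t).
    p = norm.cdf(y, loc=mu, scale=sigma)

    # Step 2: empirical frequency of each level, |{s : p_s <= p_t}| / T.
    empirical = np.searchsorted(np.sort(p), p, side="right") / len(p)

    # Step 3: fit the recalibrator R mapping raw CDF levels to empirical
    # coverage.
    R = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
    R.fit(p, empirical)

    # At test time, compose R with a new forecast's raw CDF.
    y_grid = np.linspace(-4.0, 4.0, 9)
    raw_cdf = norm.cdf(y_grid, loc=0.0, scale=0.5)
    calibrated_cdf = R.predict(raw_cdf)

Isotonic regression is a natural choice for R here because calibration only requires a monotone remapping of CDF levels; credible intervals of the recalibrated forecast are then read off from the composed CDF.
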

Author Information

Volodymyr Kuleshov (Stanford University)
Nathan Fenner (Afresh Technologies)
Stefano Ermon (Stanford University)
