To obtain uncertainty estimates with real-world Bayesian deep learning models, practical inference approximations are needed. Dropout variational inference (VI), for example, has been used for machine vision and medical applications, but VI can severely underestimate model uncertainty. Alpha-divergences are alternatives to VI's KL objective that avoid this underestimation. However, they are hard to use in practice: existing techniques support only Gaussian approximating distributions and require radical changes to existing models, which limits their use for practitioners. We propose a re-parametrisation of the alpha-divergence objectives, deriving a simple inference technique that, together with dropout, can be implemented with existing models by simply changing the model's loss. We demonstrate improved uncertainty estimates and accuracy compared to VI in dropout networks. We study our model's epistemic uncertainty far away from the data using adversarial images, showing that these can be distinguished from non-adversarial images by examining the model's uncertainty.
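Since the abstract only describes the loss change at a high level, here is a minimal sketch of a dropout alpha-divergence loss of this kind. PyTorch, the function name mc_alpha_loss, and the values alpha=0.5 and K=10 dropout samples are illustrative assumptions, not the authors' code: the idea is to run K stochastic dropout forward passes and take a log-mean-exp over alpha-scaled per-example log-likelihoods, scaled by -1/alpha.

import math
import torch
import torch.nn.functional as F

def mc_alpha_loss(model, x, y, alpha=0.5, K=10):
    """Sketch of an alpha-divergence loss via K stochastic dropout passes.

    model must be in train() mode so dropout stays active and each
    forward pass samples a different set of dropped units.
    """
    log_liks = []
    for _ in range(K):
        logits = model(x)  # one dropout sample of the weights
        # per-example log-likelihood log p(y_n | x_n, w_k)
        log_liks.append(-F.cross_entropy(logits, y, reduction="none"))
    log_liks = torch.stack(log_liks, dim=0)  # shape (K, N)
    # log (1/K) sum_k exp(alpha * log_lik), per example
    lse = torch.logsumexp(alpha * log_liks, dim=0) - math.log(K)
    return -(1.0 / alpha) * lse.sum()

In this sketch the prior/KL term of dropout VI would be played by standard L2 weight decay (e.g., set through the optimizer), and as alpha approaches 0 the objective recovers the usual dropout VI loss, i.e., the average negative log-likelihood over the K passes.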
Author Information
Yingzhen Li (University of Cambridge)
Yarin Gal (University of Cambridge)
Related Events (a corresponding poster, oral, or spotlight)
- 2017 Talk: Dropout Inference in Bayesian Neural Networks with Alpha-divergences
  Wed. Aug 9th 03:30 -- 03:48 AM, Darling Harbour Theatre
More from the Same Authors
- 2018 Poster: Disentangled Sequential Autoencoder
  Yingzhen Li · Stephan Mandt
- 2018 Oral: Disentangled Sequential Autoencoder
  Yingzhen Li · Stephan Mandt
- 2017 Poster: Deep Bayesian Active Learning with Image Data
  Yarin Gal · Riashat Islam · Zoubin Ghahramani
- 2017 Talk: Deep Bayesian Active Learning with Image Data
  Yarin Gal · Riashat Islam · Zoubin Ghahramani