Evaluating Robustness of Predictive Uncertainty Estimation: Are Dirichlet-based Models Reliable?

Anna-Kathrin Kopetzki · Bertrand Charpentier · Daniel Zügner · Sandhya Giri · Stephan Günnemann

Keywords: [ Supervised Learning ] [ Algorithms ]

Wed 21 Jul 9 a.m. PDT — 11 a.m. PDT
Spotlight presentation: Supervised Learning 1
Wed 21 Jul 5 a.m. PDT — 6 a.m. PDT


Dirichlet-based uncertainty (DBU) models are a recent and promising class of uncertainty-aware models. DBU models predict the parameters of a Dirichlet distribution to provide fast, high-quality uncertainty estimates alongside class predictions. In this work, we present the first large-scale, in-depth study of the robustness of DBU models under adversarial attacks. Our results suggest that the uncertainty estimates of DBU models are not robust w.r.t. three important tasks: (1) indicating correctly and wrongly classified samples; (2) detecting adversarial examples; and (3) distinguishing between in-distribution (ID) and out-of-distribution (OOD) data. Additionally, we explore the first approaches to making DBU models more robust. While adversarial training has a minor effect, our median-smoothing-based approach significantly increases the robustness of DBU models.
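To illustrate the core idea behind DBU models, the sketch below shows how uncertainty estimates can be read off from predicted Dirichlet concentration parameters. This is a minimal, hypothetical example (the concentration values and helper function are illustrative, not from the paper): the mean of the Dirichlet gives the class probabilities, while the total concentration acts as a pseudo-count of evidence, so a low total signals distributional uncertainty of the kind DBU models use to flag OOD inputs.

```python
import numpy as np

def dirichlet_uncertainty(alpha):
    """Return (predicted class, expected max probability, total evidence)
    for Dirichlet concentration parameters alpha (one alpha_k > 0 per class).

    alpha0 = sum_k alpha_k is the total evidence: a low alpha0 indicates
    high distributional uncertainty (e.g., an OOD-like input)."""
    alpha0 = alpha.sum()
    probs = alpha / alpha0  # mean of the Dirichlet distribution
    return int(probs.argmax()), float(probs.max()), float(alpha0)

# Hypothetical model outputs for two inputs on a 3-class problem:
alpha_id = np.array([12.0, 1.5, 1.2])   # peaked: confident, ID-like
alpha_ood = np.array([1.1, 1.0, 1.2])   # flat: uncertain, OOD-like

print(dirichlet_uncertainty(alpha_id))
print(dirichlet_uncertainty(alpha_ood))
```

The paper's attacks perturb the input so that such uncertainty readouts become misleading, e.g., keeping the evidence high on an adversarial example; the proposed median-smoothing defense stabilizes these estimates under small perturbations.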
