

Poster in Workshop on Socially Responsible Machine Learning

Diverse and Amortised Counterfactual Explanations for Uncertainty Estimates

Dan Ley · Umang Bhatt · Adrian Weller


Abstract:

To interpret uncertainty estimates from differentiable probabilistic models, Antorán et al. (2021) proposed generating a single Counterfactual Latent Uncertainty Explanation (CLUE) for a given data point where the model is uncertain. Ley et al. (2021) formulated δ-CLUE, the set of CLUEs within a δ ball of the original input in latent space; however, we find that many CLUEs generated by this method are very similar and hence redundant. Here we propose DIVerse CLUEs (∇-CLUEs), a set of CLUEs which each provide a distinct explanation of how one can decrease the uncertainty associated with an input. We further introduce GLobal AMortised CLUEs (GLAM-CLUEs), amortised mappings that apply to specific groups of uncertain inputs, efficiently transforming them, in a single function call, into inputs about which the model is certain. Our experiments show that ∇-CLUEs and GLAM-CLUEs both address shortcomings of CLUE and provide beneficial explanations of uncertainty estimates to practitioners.
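To make the ∇-CLUE idea concrete, below is a minimal illustrative sketch of a diverse, δ-ball-constrained counterfactual search in latent space. It is not the authors' implementation: the toy decoder, toy classifier, entropy-based uncertainty, penalty weights, and all function names are assumptions made purely for illustration.

```python
# Hypothetical sketch of a diverse-CLUE-style search (all models and hyperparameters
# are illustrative assumptions, not the authors' method). We optimise k latent points
# near z0 whose decodings lower a model's predictive entropy, keep them inside a
# delta ball around z0, and penalise candidates that collapse onto each other.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
latent_dim, input_dim, n_classes, k, delta = 8, 16, 3, 4, 2.0

decoder = torch.nn.Linear(latent_dim, input_dim)      # stand-in generative decoder
classifier = torch.nn.Linear(input_dim, n_classes)    # stand-in probabilistic classifier

def uncertainty(x):
    # Predictive entropy of the toy classifier, used here as the uncertainty estimate.
    p = F.softmax(classifier(x), dim=-1)
    return -(p * p.clamp_min(1e-12).log()).sum(-1)

x0 = torch.randn(input_dim)                 # uncertain input
z0 = torch.randn(latent_dim)                # its (assumed) latent encoding
z = (z0 + 0.1 * torch.randn(k, latent_dim)).requires_grad_(True)
opt = torch.optim.Adam([z], lr=0.05)

for _ in range(200):
    opt.zero_grad()
    x = decoder(z)
    # CLUE-style cost: low uncertainty plus proximity to the original input ...
    cost = uncertainty(x).mean() + 0.1 * (x - x0).abs().sum(-1).mean()
    # ... minus a diversity term that pushes the k candidates apart in latent space.
    cost = cost - 0.05 * torch.cdist(z, z).mean()
    cost.backward()
    opt.step()
    with torch.no_grad():                   # project back into the delta ball around z0
        offset = z - z0
        norm = offset.norm(dim=-1, keepdim=True).clamp_min(1e-12)
        z.copy_(z0 + offset * (delta / norm).clamp(max=1.0))

print("uncertainty before:", uncertainty(x0).item())
print("uncertainty after: ", uncertainty(decoder(z)).detach().numpy())
```

In this reading, a GLAM-CLUE would replace the per-input optimisation loop above with a single learned mapping applied to a whole group of uncertain inputs at once, which is what the abstract means by "a single function call".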
