

Poster in Workshop: Responsible Decision Making in Dynamic Environments

Combining Counterfactuals With Shapley Values To Explain Image Models

Aditya Lahiri · Kamran Alipour · Ehsan Adeli · Babak Salimi


Abstract:

With the widespread use of sophisticated machine learning models in sensitive applications, understanding their decision-making has become an essential task. Explanation methods for models trained on tabular data have made significant progress, largely because such data consist of a small number of discrete features. Applying these methods to high-dimensional inputs such as images, however, is not trivial: at the pixel level, images carry no inherent interpretability. In this work, we use annotated high-level interpretable features of images to provide explanations. We leverage the Shapley value framework from game theory, which is widely accepted across explainable AI (XAI) problems. By developing a pipeline that generates counterfactuals and then uses them to estimate Shapley values, we obtain contrastive and interpretable explanations with strong axiomatic guarantees.
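To illustrate how counterfactuals can plug into Shapley value estimation, the following minimal Python sketch estimates the Shapley value of each annotated concept by sampling random orderings of concepts and measuring each concept's marginal contribution to the model's score. The helper generate_counterfactual, the scoring model, and the concept names are hypothetical placeholders assumed for this sketch; it is not the authors' implementation.

```python
# Minimal sketch, assuming a model that scores an image and a counterfactual
# generator that "removes" concepts; both are hypothetical placeholders and
# not the paper's pipeline.
import random

def shapley_values(model, image, concepts, generate_counterfactual, n_samples=200):
    """Monte Carlo estimate of the Shapley value of each interpretable concept.

    model: callable(image) -> scalar score.
    concepts: list of annotated high-level feature names.
    generate_counterfactual: callable(image, active_concepts) -> counterfactual
        image in which only `active_concepts` keep their original values.
    """
    n = len(concepts)
    totals = {c: 0.0 for c in concepts}
    for _ in range(n_samples):
        order = random.sample(concepts, n)      # random permutation of concepts
        active = []
        prev_score = model(generate_counterfactual(image, active))
        for c in order:
            active.append(c)                    # grow the coalition by concept c
            score = model(generate_counterfactual(image, active))
            totals[c] += score - prev_score     # marginal contribution of c
            prev_score = score
    return {c: total / n_samples for c, total in totals.items()}
```

Averaging marginal contributions over random orderings converges to the exact Shapley values, which is what carries over the framework's axiomatic guarantees (efficiency, symmetry, dummy, and additivity) to the resulting explanations.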
