Poster

Provably Better Explanations with Optimized Aggregation of Feature Attributions

Thomas Decker · Ananta Bhattarai · Jindong Gu · Volker Tresp · Florian Buettner


Abstract:

Post-hoc explanation of opaque machine learning models through feature attributions is a common practice for understanding and verifying their predictions. Despite the numerous techniques available, individual methods often produce inconsistent and unstable results, putting their overall reliability into question. In this work, we aim to systematically improve the quality of feature attributions by combining multiple explanations across distinct methods or their variations. To this end, we propose a novel approach that derives optimal convex combinations of feature attributions, yielding provable improvements in desired quality criteria such as robustness or faithfulness to the model behavior. Through extensive experiments involving various model architectures and popular feature attribution techniques, we demonstrate that our combination strategy consistently outperforms individual methods and existing baselines.
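The core idea, searching over convex combinations of several attribution maps to improve a quality criterion, can be sketched on a toy example. Everything below is an illustrative assumption, not the paper's method: the model is linear, the three attribution "methods" are the exact attribution plus noise at different scales, and faithfulness is approximated as the correlation between an attribution vector and the output drop from zeroing each feature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not the paper's API): a linear model
# f(x) = w . x, whose exact feature attributions are w * x.
d = 8
w_model = rng.normal(size=d)
x = rng.normal(size=d)

def f(z):
    return w_model @ z

# Three hypothetical attribution methods: exact attribution plus noise.
true_attr = w_model * x
A = np.stack([true_attr + rng.normal(scale=s, size=d) for s in (0.1, 0.5, 1.0)])

# Faithfulness proxy (an assumption): correlation between an attribution
# vector and the output drop caused by zeroing each feature in turn.
drops = np.array([f(x) - f(np.where(np.arange(d) == i, 0.0, x)) for i in range(d)])

def faithfulness(attr):
    return np.corrcoef(attr, drops)[0, 1]

# Brute-force search over a grid on the probability simplex (k = 3 methods).
# The corners of the grid recover each individual method, so the best
# combination can never score worse than the best single method.
best_w, best_score = None, -np.inf
grid = np.linspace(0.0, 1.0, 21)
for w1 in grid:
    for w2 in grid:
        if w1 + w2 > 1:
            continue
        wts = np.array([w1, w2, 1 - w1 - w2])
        score = faithfulness(wts @ A)
        if score > best_score:
            best_w, best_score = wts, score

print("best weights:", best_w.round(2), "faithfulness:", round(best_score, 3))
```

Because the grid contains the simplex corners, the selected combination is guaranteed to match or exceed every individual method under the chosen criterion; the paper's contribution is deriving such optimal weights with provable improvement guarantees rather than by brute force.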
