

Poster

Rethinking Attention-Model Explainability through Faithfulness Violation Test

Yibing Liu · Haoliang Li · Yangyang Guo · Chenqi KONG · Jing Li · Shiqi Wang

Hall E #415

Keywords: [ SA: Trustworthy Machine Learning ] [ SA: Accountability, Transparency and Interpretability ] [ DL: Attention Mechanisms ]


Abstract:

Attention mechanisms dominate the explainability of deep models. They produce probability distributions over the input, which are widely regarded as feature-importance indicators. However, in this paper, we identify a critical limitation of attention explanations: they fail to capture the polarity of feature impact. This can be misleading: features with higher attention weights may not faithfully contribute to model predictions and can instead impose suppression effects. Motivated by this finding, we reflect on the explainability of current attention-based techniques, such as Attention ⨀ Gradient and LRP-based attention explanations. We first propose an actionable diagnostic methodology (henceforth the faithfulness violation test) that measures the consistency between explanation weights and impact polarity. Through extensive experiments, we then show that most tested explanation methods are unexpectedly hindered by the faithfulness violation issue, especially raw attention. Empirical analyses of the factors affecting violations further provide useful guidance for adopting explanation methods in attention models.
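To make the idea of a faithfulness violation concrete, below is a minimal sketch, not the paper's exact metric: it estimates each feature's impact polarity by occlusion (replacing the feature with a baseline value and observing the change in the class score) and then counts features that an explanation ranks as important despite having a negative, i.e. suppressive, impact. The names `predict_fn`, `impact_polarity`, and `violation_rate`, the toy softmax model, and the thresholding choice are all illustrative assumptions; the paper's test may measure impact and consistency differently.

```python
import numpy as np

def impact_polarity(predict_fn, x, baseline, class_idx):
    """Estimate per-feature impact by occlusion: replace one feature at a
    time with its baseline value and record the drop in the class score.
    Positive impact = the feature supports the prediction;
    negative impact = the feature suppresses it."""
    base_score = predict_fn(x)[class_idx]
    impacts = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        x_occluded = x.copy()
        x_occluded[i] = baseline[i]
        impacts[i] = base_score - predict_fn(x_occluded)[class_idx]
    return impacts

def violation_rate(weights, impacts, thresh=None):
    """Fraction of suppressive features (negative impact) that the
    explanation nevertheless flags as important (weight above the mean
    by default). Nonnegative attention weights cannot signal suppression,
    which is exactly the failure mode being probed."""
    thresh = weights.mean() if thresh is None else thresh
    suppressive = impacts < 0
    flagged_important = weights > thresh
    n_suppressive = int(suppressive.sum())
    if n_suppressive == 0:
        return 0.0
    return float((suppressive & flagged_important).sum() / n_suppressive)

# Toy demonstration with a softmax over a linear score (illustrative only)
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))          # 3 classes, 5 input features

def predict_fn(x):
    logits = W @ x
    e = np.exp(logits - logits.max())
    return e / e.sum()

x = rng.normal(size=5)
baseline = np.zeros(5)
attn = np.abs(rng.normal(size=5))    # stand-in for attention weights
attn /= attn.sum()

c = int(np.argmax(predict_fn(x)))
impacts = impact_polarity(predict_fn, x, baseline, c)
print("violation rate:", violation_rate(attn, impacts))
```

In this toy setup, any feature with a high attention-style weight but a negative occlusion impact counts as a violation, which illustrates why explanations restricted to nonnegative weights can misstate the polarity of feature influence.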
