Poster

Attention Meets Post-hoc Interpretability: A Mathematical Perspective

Gianluigi Lopardo · Frederic Precioso · Damien Garreau


Abstract:

Attention-based architectures, in particular transformers, are at the heart of a technological revolution. Interestingly, in addition to helping obtain state-of-the-art results on a wide range of applications, the attention mechanism intrinsically provides meaningful insights into the internal behavior of the model. Can these insights be used as explanations? Debate rages on. In this paper, we mathematically study a simple attention-based architecture and pinpoint the differences between post-hoc and attention-based explanations. We show that they provide quite different results, and that, despite their limitations, post-hoc methods are capable of capturing more useful insights than merely examining the attention weights.
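The contrast between the two kinds of explanation can be illustrated with a toy sketch. The model below is a hypothetical single-head attention classifier (not the architecture studied in the paper), and leave-one-out occlusion stands in for a generic post-hoc method; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-head attention classifier (illustrative only):
# tokens -> attention-weighted average of embeddings -> linear score.
d, n = 4, 5                        # embedding dim, sequence length
E = rng.normal(size=(n, d))        # token embeddings
q = rng.normal(size=d)             # "learned" query vector
w = rng.normal(size=d)             # linear classifier weights

def softmax(z):
    z = z - z.max()                # numerical stability
    e = np.exp(z)
    return e / e.sum()

def predict(E):
    alpha = softmax(E @ q)         # attention weights over tokens
    return w @ (alpha @ E), alpha  # scalar score, attention distribution

score, alpha = predict(E)

# Post-hoc explanation via leave-one-out occlusion: remove each token
# and measure how much the model's output changes.
occlusion = np.array([score - predict(np.delete(E, i, axis=0))[0]
                      for i in range(n)])

# The two "explanations" can rank tokens quite differently.
print("attention weights:", np.round(alpha, 3))
print("occlusion scores :", np.round(occlusion, 3))
```

Attention weights are always non-negative and sum to one, whereas occlusion scores are signed and unnormalized, which is one simple reason the two need not agree on which tokens matter.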