

Spotlight

XAI for Transformers: Better Explanations through Conservative Propagation

Ameen Ali · Thomas Schnake · Oliver Eberle · Grégoire Montavon · Klaus-Robert Müller · Lior Wolf

Ballroom 3 & 4

Abstract:

Transformers have become an important workhorse of machine learning, with numerous applications. This necessitates the development of reliable methods for increasing their transparency. Multiple interpretability methods, often based on gradient information, have been proposed. We show that the gradient in a Transformer reflects the function only locally and thus fails to reliably identify the contribution of input features to the prediction. We identify Attention Heads and LayerNorm as the main causes of such unreliable explanations and propose a more stable way of propagating relevance through these layers. Our proposal, which can be seen as a proper extension of the well-established LRP method to Transformers, is shown both theoretically and empirically to overcome the deficiencies of a simple gradient-based approach, and it achieves state-of-the-art explanation performance on a broad range of Transformer models and datasets.
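The core proposal can be summarized as follows: gradient × input becomes a conservative, LRP-style relevance propagation if the attention weights and the LayerNorm normalization factor are treated as constants during backpropagation. Below is a minimal, hypothetical PyTorch sketch of that idea, not the authors' reference implementation; the function names and the single-head setup are illustrative assumptions, and only the placement of the detach operations follows the propagation rules the abstract describes.

```python
import math
import torch
import torch.nn.functional as F

def layernorm_detached(x, eps=1e-5):
    # LayerNorm whose normalization factor is detached from the graph,
    # i.e. treated as a constant during backpropagation.
    mean = x.mean(dim=-1, keepdim=True)
    var = x.var(dim=-1, keepdim=True, unbiased=False)
    return (x - mean) / torch.sqrt(var + eps).detach()

def attention_detached(q, k, v):
    # Single-head attention with the softmax weights detached: with the
    # attention matrix held constant, the head is linear in v, and
    # gradient × input redistributes relevance conservatively through it.
    a = F.softmax(q @ k.transpose(-2, -1) / math.sqrt(q.shape[-1]), dim=-1)
    return a.detach() @ v

# Gradient × input on a model built from these layers then yields the
# relevance scores; for a scalar model output f(x):
#
#   x.requires_grad_(True)
#   f = model(x)                      # forward pass with detached layers
#   grad, = torch.autograd.grad(f, x)
#   relevance = (x * grad).sum(-1)    # one score per input token
```

One attraction of expressing the propagation rules as detach operations is that they can be retrofitted onto an existing Transformer implementation without writing explicit backward rules for each layer.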
