Attention is not all you need: pure attention loses rank doubly exponentially with depth

Yihe Dong · Jean-Baptiste Cordonnier · Andreas Loukas

Keywords: Architectures

Poster: Spot C3 in Virtual World, Tue 20 Jul, 9–11 a.m. PDT
Oral presentation: Deep Learning Applications, Tue 20 Jul, 5–6 a.m. PDT


Attention-based architectures have become ubiquitous in machine learning, yet our understanding of the reasons for their effectiveness remains limited. This work proposes a new way to understand self-attention networks: we show that their output can be decomposed into a sum of smaller terms, or paths, each involving the operation of a sequence of attention heads across layers. Using this path decomposition, we prove that self-attention possesses a strong inductive bias towards "token uniformity". Specifically, without skip connections or multi-layer perceptrons (MLPs), the output converges doubly exponentially to a rank-1 matrix. Conversely, skip connections and MLPs prevent the output from degenerating. Our experiments verify the convergence results on standard transformer architectures.
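The rank-collapse phenomenon described in the abstract can be illustrated with a small NumPy sketch (not the authors' code): stack softmax self-attention layers with no skip connections or MLPs and track the Frobenius distance from the output to its nearest rank-1 matrix of the form 1xᵀ. The sequence length, width, layer count, and random query/key projections below are hypothetical choices, and the value/output projections are set to identity to keep the example minimal.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 8, 16  # sequence length and model width; hypothetical toy sizes

def attention_layer(X, Wq, Wk):
    """One self-attention layer with no skip connection or MLP.
    Value/output projections are identity for simplicity."""
    scores = (X @ Wq) @ (X @ Wk).T / np.sqrt(d)
    A = np.exp(scores - scores.max(axis=-1, keepdims=True))
    A /= A.sum(axis=-1, keepdims=True)  # row-stochastic attention matrix
    return A @ X

def rank1_residual(X):
    """Distance from X to the rank-1 matrix 1 x^T whose rows all equal the
    mean token: the 'token uniformity' residual."""
    return np.linalg.norm(X - np.ones((n, 1)) * X.mean(axis=0, keepdims=True))

X = rng.standard_normal((n, d))
residuals = [rank1_residual(X)]
for _ in range(6):  # fresh random weights per layer, as in a deep network
    Wq = rng.standard_normal((d, d)) / np.sqrt(d)
    Wk = rng.standard_normal((d, d)) / np.sqrt(d)
    X = attention_layer(X, Wq, Wk)
    residuals.append(rank1_residual(X))

print([round(r, 6) for r in residuals])  # residual collapses toward 0
```

Because each attention matrix is row-stochastic, every output token is a convex combination of the input tokens, so the tokens are pulled toward one another; as they grow more similar, the attention weights flatten toward uniform averaging and the collapse accelerates, matching the doubly-exponential convergence the paper proves.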
