Poster
Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention
Angelos Katharopoulos · Apoorv Vyas · Nikolaos Pappas · François Fleuret
Keywords: [ Architectures ] [ Deep Sequence Models ] [ Algorithms ] [ Deep Learning - Algorithms ]
Abstract:
Transformers achieve remarkable performance in several tasks but, due to their
quadratic complexity with respect to the input's length, they are
prohibitively slow for very long sequences. To address this limitation, we
express self-attention as a linear dot-product of kernel feature maps and
make use of the associativity property of matrix products to reduce the
complexity from $\mathcal{O}(N^2)$ to $\mathcal{O}(N)$, where $N$ is the sequence length.
We show that this formulation permits an iterative implementation that
dramatically accelerates autoregressive transformers and reveals their
relationship to recurrent neural networks. Our Linear Transformers
achieve performance comparable to vanilla Transformers and are up to 4000x
faster on autoregressive prediction of very long sequences.
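The core idea described above can be sketched in a few lines: replace the softmax attention kernel with a dot product of feature maps, then reorder the matrix products so the cost grows linearly in the sequence length. The snippet below is a minimal NumPy illustration, not the authors' implementation; the feature map elu(x) + 1 and the function names are assumptions chosen for the example.

```python
import numpy as np

def feature_map(x):
    # A simple positive feature map; elu(x) + 1 is one possible choice
    # (an assumption for this sketch, not stated in the abstract).
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    # Non-causal linearized attention.
    # Softmax attention computes softmax(Q K^T) V, which costs O(N^2).
    # With a kernel feature map phi, computing phi(Q) (phi(K)^T V) instead of
    # (phi(Q) phi(K)^T) V uses associativity to bring the cost down to O(N).
    Qf, Kf = feature_map(Q), feature_map(K)       # (N, d)
    KV = Kf.T @ V                                  # (d, d_v), summed over positions
    Z = Qf @ Kf.sum(axis=0, keepdims=True).T       # (N, 1) normalizer
    return (Qf @ KV) / Z

def causal_linear_attention(Q, K, V):
    # Autoregressive (causal) case written as a recurrence: the running
    # sums S and z act as a fixed-size hidden state updated once per
    # position, which is the "transformers are RNNs" view in the abstract.
    Qf, Kf = feature_map(Q), feature_map(K)
    N, d = Qf.shape
    d_v = V.shape[1]
    S = np.zeros((d, d_v))     # running sum of phi(k_i) v_i^T
    z = np.zeros(d)            # running sum of phi(k_i)
    out = np.zeros((N, d_v))
    for i in range(N):
        S += np.outer(Kf[i], V[i])
        z += Kf[i]
        out[i] = (Qf[i] @ S) / (Qf[i] @ z + 1e-6)
    return out

# Example usage with random inputs (shapes are illustrative).
N, d, d_v = 1024, 64, 64
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(N, d)), rng.normal(size=(N, d)), rng.normal(size=(N, d_v))
out = causal_linear_attention(Q, K, V)   # O(N) in the sequence length
```

In the causal variant, S and z form a constant-size state carried across positions, so generating each new token costs a fixed amount of work instead of an amount that grows with the sequence length; this is the source of the large speedups on long autoregressive sequences.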