

Spotlight in Workshop: Subset Selection in Machine Learning: From Theory to Applications

Sparsifying Transformer Models with Trainable Representation Pooling

Michał Pietruszka · Łukasz Borchmann · Łukasz Garncarek


Abstract: We propose a novel method to sparsify attention in the Transformer model by learning to select the most informative token representations during training, thus focusing on the task-specific parts of an input. The reduction from quadratic to sublinear time and memory complexity is achieved with a robust trainable top-$k$ operator. Our experiments on a challenging long-document summarization task show that even our simple baseline performs comparably to the current SOTA, and with the trainable selection we can retain its top quality while being $1.8\times$ faster during training, $4.5\times$ faster during inference, and up to $16\times$ more computationally efficient in the decoder. The method can be effortlessly applied to many models used in NLP and CV.
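
To illustrate the general idea of pooling a fixed number of token representations with a learned scorer, here is a minimal sketch in PyTorch. This is an assumption-laden illustration, not the paper's actual top-$k$ operator: the module name `TopKTokenPooling`, the sigmoid scoring, and the gradient trick of scaling the gathered tokens by their scores are all choices made for this sketch.

```python
# Minimal sketch of trainable top-k token pooling (illustrative only; NOT the
# paper's exact operator). A learned scorer rates each token representation,
# the top-k tokens are kept, and downstream attention runs over k << n tokens.
import torch
import torch.nn as nn


class TopKTokenPooling(nn.Module):
    def __init__(self, hidden_dim: int, k: int):
        super().__init__()
        self.k = k
        # Learned scorer assigning one relevance score per token.
        self.scorer = nn.Linear(hidden_dim, 1)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_dim)
        scores = self.scorer(hidden_states).squeeze(-1)         # (batch, seq_len)
        weights = torch.sigmoid(scores)                         # bounded relevance weights
        topk_weights, topk_idx = weights.topk(self.k, dim=-1)   # hard top-k selection
        idx = topk_idx.unsqueeze(-1).expand(-1, -1, hidden_states.size(-1))
        selected = hidden_states.gather(1, idx)                 # (batch, k, hidden_dim)
        # Scaling by the weights lets gradients reach the scorer even though
        # the index selection itself is non-differentiable.
        return selected * topk_weights.unsqueeze(-1)


# Usage: pool 16 representations out of a 1024-token sequence.
x = torch.randn(2, 1024, 768)
pooled = TopKTokenPooling(hidden_dim=768, k=16)(x)
print(pooled.shape)  # torch.Size([2, 16, 768])
```

Because only $k$ representations are passed on, any subsequent cross-attention cost scales with $k$ rather than the full sequence length, which is the source of the decoder-side savings the abstract refers to.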