
EL-Attention: Memory Efficient Lossless Attention for Generation
Yu Yan · Jiusheng Chen · Weizhen Qi · Nikhil Bhendawade · Yeyun Gong · Nan Duan · Ruofei Zhang

Thu Jul 22 07:45 AM -- 07:50 AM (PDT)

A Transformer model with multi-head attention requires caching intermediate results for efficient inference in generation tasks. However, the cache brings new memory-related costs and prevents the use of larger batch sizes for faster decoding. We propose memory-efficient lossless attention (EL-attention) to address this issue. It avoids the heavy operations of building multi-head keys and values, so no cache is needed for them. EL-attention constructs an ensemble of attention results by expanding the query while keeping the key and value shared across heads. It produces the same result as multi-head attention with less GPU memory and faster inference speed. We conduct extensive experiments on Transformer, BART, and GPT-2 for summarization and question generation tasks. The results show EL-attention speeds up existing models by 1.6x to 5.3x with no loss in accuracy.
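The lossless equivalence claimed in the abstract follows from associativity of matrix products: the per-head key projection can be folded into the query (Q·Wq·Wkᵀ attends against the raw hidden states), and the per-head value projection can be applied after the attention-weighted sum. Below is a minimal NumPy sketch of that equivalence under assumed illustrative shapes; all weight names (`Wq`, `Wk`, `Wv`, `Wo`) and dimensions are hypothetical, not taken from the paper's released code.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_heads, d_head = 16, 4, 4   # illustrative sizes, d_model = n_heads * d_head
src_len = 6

# Hypothetical per-head projection weights.
Wq = rng.normal(size=(n_heads, d_model, d_head))
Wk = rng.normal(size=(n_heads, d_model, d_head))
Wv = rng.normal(size=(n_heads, d_model, d_head))
Wo = rng.normal(size=(n_heads, d_head, d_model))

H = rng.normal(size=(src_len, d_model))   # hidden states attended over (shared)
x = rng.normal(size=(1, d_model))         # current decoding step's query input

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

scale = 1.0 / np.sqrt(d_head)

# Standard multi-head attention: per-head K and V are built (and cached).
out_std = np.zeros((1, d_model))
for i in range(n_heads):
    Q, K, V = x @ Wq[i], H @ Wk[i], H @ Wv[i]
    A = softmax((Q @ K.T) * scale)
    out_std += (A @ V) @ Wo[i]

# EL-attention style: expand the query per head, share raw H as key and value,
# and fold the value projection into the output projection. No per-head
# K/V tensors are ever materialized.
out_el = np.zeros((1, d_model))
for i in range(n_heads):
    q_exp = x @ Wq[i] @ Wk[i].T          # expanded query absorbs Wk
    A = softmax((q_exp @ H.T) * scale)   # attend directly against shared H
    out_el += (A @ H) @ (Wv[i] @ Wo[i])  # Wv applied after the weighted sum

# Both paths produce identical outputs (lossless).
assert np.allclose(out_std, out_el)
```

Because only `H` is needed at decode time, the memory saving grows with the number of heads, which is what allows the larger batch sizes reported in the abstract.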

Author Information

Yu Yan (Microsoft)
Jiusheng Chen (Microsoft)
Weizhen Qi (University of Science and Technology of China)
Nikhil Bhendawade (Microsoft)
Yeyun Gong (Microsoft Research Asia)
Nan Duan (Microsoft Research)
Ruofei Zhang (Microsoft)
