Poster
Brainformers: Trading Simplicity for Efficiency
Yanqi Zhou · Nan Du · Yanping Huang · Daiyi Peng · Chang Lan · Da Huang · Siamak Shakeri · David So · Andrew Dai · Yifeng Lu · Zhifeng Chen · Quoc Le · Claire Cui · James Laudon · Jeff Dean

Tue Jul 25 05:00 PM -- 06:30 PM (PDT) @ Exhibit Hall 1 #719

Transformers are central to recent successes in natural language processing and computer vision. They have a mostly uniform backbone in which layers alternate between feed-forward and self-attention to build a deep network. Here we investigate this design choice and find that more complex blocks with different permutations of layer primitives can be more efficient. Using this insight, we develop a complex block, named Brainformer, that consists of a diverse set of layers such as sparsely gated feed-forward layers, dense feed-forward layers, attention layers, and various forms of layer normalization and activation functions. Brainformer consistently outperforms state-of-the-art dense and sparse Transformers in terms of both quality and efficiency. A Brainformer model with 8 billion activated parameters per token demonstrates 2x faster training convergence and 5x faster step time compared to its GLaM counterpart. In downstream task evaluation, Brainformer also achieves a 3% higher SuperGLUE score with fine-tuning than GLaM with a similar number of activated parameters. Finally, Brainformer largely outperforms Primer, a dense model derived with NAS, with similar computation per token in few-shot evaluations.
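To make the block idea concrete, here is a minimal, hypothetical sketch (assuming PyTorch; the layer permutation, expert count, and top-1 gating below are illustrative assumptions, not the architecture found in the paper, which is obtained by search). It shows the core notion: mixing attention, dense feed-forward, and sparsely gated mixture-of-experts sublayers in a non-uniform order, each wrapped with layer normalization and a residual connection.

```python
# Hypothetical sketch of a Brainformer-style block; not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseFFN(nn.Module):
    """Standard dense feed-forward sublayer."""
    def __init__(self, d_model, d_hidden):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                                 nn.Linear(d_hidden, d_model))

    def forward(self, x):
        return self.net(x)


class SparseMoEFFN(nn.Module):
    """Sparsely gated feed-forward sublayer: each token is routed to its top-1 expert."""
    def __init__(self, d_model, d_hidden, num_experts=4):
        super().__init__()
        self.gate = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            [DenseFFN(d_model, d_hidden) for _ in range(num_experts)])

    def forward(self, x):
        # x: (batch, seq, d_model)
        scores = F.softmax(self.gate(x), dim=-1)   # routing probabilities
        top_prob, top_idx = scores.max(dim=-1)     # top-1 expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top_idx == i
            if mask.any():
                out[mask] = expert(x[mask]) * top_prob[mask].unsqueeze(-1)
        return out


class BrainformerBlock(nn.Module):
    """Non-uniform stack of layer primitives, each with pre-layer-norm and a residual."""
    def __init__(self, d_model=256, n_heads=4, d_hidden=1024, num_experts=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # One illustrative permutation of primitives; the paper searches over many.
        self.sublayers = nn.ModuleList([
            SparseMoEFFN(d_model, d_hidden, num_experts),
            DenseFFN(d_model, d_hidden),
            SparseMoEFFN(d_model, d_hidden, num_experts),
        ])
        self.norms = nn.ModuleList(
            [nn.LayerNorm(d_model) for _ in range(len(self.sublayers) + 1)])

    def forward(self, x):
        h = self.norms[0](x)
        attn_out, _ = self.attn(h, h, h)
        x = x + attn_out
        for norm, layer in zip(self.norms[1:], self.sublayers):
            x = x + layer(norm(x))
        return x


if __name__ == "__main__":
    block = BrainformerBlock()
    tokens = torch.randn(2, 16, 256)   # (batch, seq, d_model)
    print(block(tokens).shape)         # torch.Size([2, 16, 256])
```

The only departure from a standard pre-layer-norm Transformer block in this sketch is the non-uniform interleaving of sparse and dense feed-forward primitives around the attention sublayer, which is the design space the paper explores.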

Author Information

Yanqi Zhou (Google)
Nan Du (Google DeepMind)
Yanping Huang (Google Brain)
Daiyi Peng (Google)
Chang Lan (Google)
Da Huang (Google)
Siamak Shakeri (Google)
David So (Google Brain)
Andrew Dai (Google DeepMind)

Andrew Dai was awarded an MA in Computer Science at the University of Cambridge before receiving a PhD in Informatics at the University of Edinburgh for text modeling with Bayesian nonparametrics. He subsequently worked at Google in Mountain View, California, on a range of teams including machine translation, Google Now, and Google Ads. In 2014, he joined the Google Brain team, where he has worked on text representations, semi-supervised learning, sequence models, adversarial training, and deep learning on medical data.

Yifeng Lu (Google)
Zhifeng Chen (Google)
Quoc Le (Google Brain)
Claire Cui (Google)
James Laudon (Stanford University)
Jeff Dean (Google Research)
