Model parallelism has become a necessity for training modern large-scale deep language models. In this work, we identify a new dimension orthogonal to existing model-parallel approaches: for Transformer-based language models, pipeline parallelism can be performed within a single training sequence thanks to their autoregressive property. This enables a more fine-grained pipeline than previous work. With this key idea, we design TeraPipe, a high-performance token-level pipeline parallel algorithm for synchronous model-parallel training of Transformer-based language models. We develop a novel dynamic programming-based algorithm to compute the optimal pipelining execution scheme for a given model and cluster configuration. We show that TeraPipe speeds up training of the largest GPT-3 model (175 billion parameters) by 5.0x on an AWS cluster of 48 p3.16xlarge instances compared with state-of-the-art model-parallel methods. The code for reproduction can be found at https://github.com/zhuohan123/terapipe.
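As a rough illustration of the dynamic-programming step mentioned in the abstract, the sketch below picks token-slice boundaries that minimize an estimated pipeline latency (sum of slice times plus the number of remaining stages times the largest slice time). It is only a sketch under assumed interfaces: plan_slices, slice_cost, and num_stages are hypothetical names, and the toy cost function stands in for the measured per-slice latencies the paper uses; see the repository linked above for the actual algorithm.

```python
# Minimal, illustrative sketch of token-level slice planning via dynamic
# programming. NOT the authors' implementation: plan_slices, slice_cost, and
# num_stages are hypothetical names, and the cost model below is a toy.

def plan_slices(seq_len, num_stages, slice_cost):
    """Partition tokens [0, seq_len) into contiguous slices minimizing an
    estimate of pipeline latency:
        sum of slice costs + (num_stages - 1) * max slice cost.

    slice_cost(start, end) -> float is the forward time of tokens
    [start, end) given all preceding tokens as context (non-negative).
    """
    # Candidate caps on the largest slice cost: every possible slice's cost.
    caps = sorted({slice_cost(i, j)
                   for i in range(seq_len) for j in range(i + 1, seq_len + 1)})
    best_plan, best_time = None, float("inf")
    for cap in caps:
        # dp[i] = (min total slice cost covering [0, i), plan) under this cap.
        dp = [(0.0, [])] + [(float("inf"), None)] * seq_len
        for end in range(1, seq_len + 1):
            for start in range(end):
                c = slice_cost(start, end)
                if c > cap or dp[start][1] is None:
                    continue
                total = dp[start][0] + c
                if total < dp[end][0]:
                    dp[end] = (total, dp[start][1] + [(start, end)])
        if dp[seq_len][1] is None:
            continue
        latency = dp[seq_len][0] + (num_stages - 1) * cap
        if latency < best_time:
            best_time, best_plan = latency, dp[seq_len][1]
    return best_plan, best_time


if __name__ == "__main__":
    # Toy cost model: a slice of the same width is slightly more expensive the
    # further right it sits, because attention covers a longer context.
    toy_cost = lambda s, e: (e - s) + 0.05 * e
    plan, latency = plan_slices(seq_len=8, num_stages=4, slice_cost=toy_cost)
    print(plan, round(latency, 2))
```

Enumerating a cap on the largest slice cost before running the prefix DP mirrors the intuition in the abstract: the slowest slice bounds every pipeline stage, so it is fixed first and the remaining slice boundaries are optimized under that bound.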
Author Information
Zhuohan Li (UC Berkeley)
Siyuan Zhuang (UC Berkeley)
Shiyuan Guo (University of California, Berkeley)
Danyang Zhuo (Duke University)
Hao Zhang (CMU)
Dawn Song (University of California, Berkeley)
Ion Stoica (UC Berkeley)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Spotlight: TeraPipe: Token-Level Pipeline Parallelism for Training Large-Scale Language Models
  Wed. Jul 21st 12:20 -- 12:25 AM
More from the Same Authors
- 2023 Poster: FlexGen: High-Throughput Generative Inference of Large Language Models with a Single GPU
  Ying Sheng · Lianmin Zheng · Binhang Yuan · Zhuohan Li · Max Ryabinin · Beidi Chen · Percy Liang · Christopher Re · Ion Stoica · Ce Zhang
- 2023 Oral: FlexGen: High-Throughput Generative Inference of Large Language Models with a Single GPU
  Ying Sheng · Lianmin Zheng · Binhang Yuan · Zhuohan Li · Max Ryabinin · Beidi Chen · Percy Liang · Christopher Re · Ion Stoica · Ce Zhang
- 2022: Single, Practical and Fast Dynamic Truncation Kernel Multiplication
  Lianke Qin · Somdeb Sarkhel · Zhao Song · Danyang Zhuo
- 2022: Inter-Operator Parallelism
  Zhuohan Li
- 2022 Tutorial: Welcome to the "Big Model" Era: Techniques and Systems to Train and Serve Bigger Models
  Hao Zhang · Lianmin Zheng · Zhuohan Li · Ion Stoica
- 2021 Poster: Resource Allocation in Multi-armed Bandit Exploration: Overcoming Sublinear Scaling with Adaptive Parallelism
  Brijen Thananjeyan · Kirthevasan Kandasamy · Ion Stoica · Michael Jordan · Ken Goldberg · Joseph E Gonzalez
- 2021 Oral: Resource Allocation in Multi-armed Bandit Exploration: Overcoming Sublinear Scaling with Adaptive Parallelism
  Brijen Thananjeyan · Kirthevasan Kandasamy · Ion Stoica · Michael Jordan · Ken Goldberg · Joseph E Gonzalez
- 2020 Workshop: Incentives in Machine Learning
  Boi Faltings · Yang Liu · David Parkes · Goran Radanovic · Dawn Song
- 2020 Poster: Train Big, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers
  Zhuohan Li · Eric Wallace · Sheng Shen · Kevin Lin · Kurt Keutzer · Dan Klein · Joseph Gonzalez
- 2019: Panel Discussion (moderator: Tom Dietterich)
  Max Welling · Kilian Weinberger · Terrance Boult · Dawn Song · Thomas Dietterich
- 2019: Keynote by Dawn Song: Adversarial Machine Learning: Challenges, Lessons, and Future Directions
  Dawn Song
- 2019 Workshop: Workshop on the Security and Privacy of Machine Learning
  Nicolas Papernot · Florian Tramer · Bo Li · Dan Boneh · David Evans · Somesh Jha · Percy Liang · Patrick McDaniel · Jacob Steinhardt · Dawn Song
- 2019 Poster: Population Based Augmentation: Efficient Learning of Augmentation Policy Schedules
  Daniel Ho · Eric Liang · Peter Chen · Ion Stoica · Pieter Abbeel
- 2019 Oral: Population Based Augmentation: Efficient Learning of Augmentation Policy Schedules
  Daniel Ho · Eric Liang · Peter Chen · Ion Stoica · Pieter Abbeel