The current standard approach to scaling transformer language models trains each model size from a different random initialization. As an alternative, we consider a staged training setup that begins with a small model and incrementally increases the amount of compute used for training by applying a "growth operator" to increase the model's depth and width. By initializing each stage with the output of the previous one, the training process effectively reuses the compute from prior stages and becomes more efficient. Each of our growth operators takes as input the entire training state (including model parameters, optimizer state, learning rate schedule, etc.) and outputs a new training state from which training continues. We identify two important properties of these growth operators: they preserve both the loss and the "training dynamics" (the rate of decrease of the loss during training) after being applied. While the loss-preserving property has been discussed previously, to the best of our knowledge this work is the first to identify the importance of preserving the training dynamics. To determine how to schedule the stages, we use the scaling laws of Kaplan et al. (2020) to derive a precise schedule that maximizes compute savings by starting a new stage when training efficiency begins to decrease. We empirically validate our growth operators and staged training for autoregressive language models, showing up to 22% compute savings compared to a strong baseline trained from scratch. Our code is available at https://github.com/allenai/staged-training.
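As a rough illustration of what applying a growth operator to a model looks like, the PyTorch sketch below grows a toy transformer's depth by duplicating each block in place. The names TinyTransformerLM and grow_depth are hypothetical and not taken from the authors' repository; the paper's operators additionally transform the optimizer state and learning-rate schedule so that the loss and training dynamics are preserved on resuming training, which this minimal sketch does not attempt.

```python
# Minimal sketch of a depth-growth operator (assumed names, not the authors' code).
import copy

import torch
import torch.nn as nn


class TinyTransformerLM(nn.Module):
    """A toy transformer LM used only to illustrate applying a growth operator."""

    def __init__(self, vocab_size=100, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            for _ in range(n_layers)
        )
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        x = self.embed(tokens)
        for block in self.blocks:
            x = block(x)
        return self.lm_head(x)


def grow_depth(model):
    """Double the depth by interleaving a copy of each block with the original.

    This only grows the model parameters; a full growth operator in the sense of
    the paper also maps the optimizer state and learning-rate schedule so that
    the loss and training dynamics are preserved when training continues.
    """
    new_blocks = []
    for block in model.blocks:
        new_blocks.append(block)
        new_blocks.append(copy.deepcopy(block))
    model.blocks = nn.ModuleList(new_blocks)
    return model


if __name__ == "__main__":
    torch.manual_seed(0)
    model = TinyTransformerLM(n_layers=2).eval()
    tokens = torch.randint(0, 100, (1, 8))
    with torch.no_grad():
        logits_before = model(tokens)
        model = grow_depth(model)  # 2 blocks -> 4 blocks
        logits_after = model(tokens)
    print(f"blocks after growth: {len(model.blocks)}")
    print(f"max |change| in logits: {(logits_after - logits_before).abs().max():.4f}")
```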
Author Information
Sheng Shen (University of California, Berkeley)
Pete Walsh (Allen Institute for AI)
Kurt Keutzer (UC Berkeley)
Jesse Dodge (University of Washington)
Matthew Peters (AI2)
Iz Beltagy (Allen Institute for AI (AI2))
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Spotlight: Staged Training for Transformer Language Models »
  Thu. Jul 21st, 07:55 -- 08:00 PM, Room Hall G
More from the Same Authors
- 2023 Poster: Poisoning Language Models During Instruction Tuning »
  Alexander Wan · Eric Wallace · Sheng Shen · Dan Klein
- 2022 Poster: What Language Model Architecture and Pretraining Objective Works Best for Zero-Shot Generalization? »
  Thomas Wang · Adam Roberts · Daniel Hesslow · Teven Le Scao · Hyung Won Chung · Iz Beltagy · Julien Launay · Colin Raffel
- 2022 Spotlight: What Language Model Architecture and Pretraining Objective Works Best for Zero-Shot Generalization? »
  Thomas Wang · Adam Roberts · Daniel Hesslow · Teven Le Scao · Hyung Won Chung · Iz Beltagy · Julien Launay · Colin Raffel
- 2020: Brainstorming & Closing »
  Mayoore Jaiswal · Ryan Lowe · Jesse Dodge · Jessica Forde · Rosanne Liu
- 2020: Q&A: Jason Hartford »
  Jason Hartford · Jesse Dodge
- 2020: Q&A: Chris Maddison »
  Chris Maddison · Jessica Forde · Jesse Dodge
- 2020: Q&A: Margaret Mitchell »
  Jesse Dodge
- 2020: Invited Talk: Margaret Mitchell »
  Jesse Dodge
- 2020: Q&A: Dani Yogatama »
  Dani Yogatama · Jesse Dodge · Jessica Forde
- 2020 Workshop: MLRetrospectives: A Venue for Self-Reflection in ML Research »
  Jessica Forde · Jesse Dodge · Mayoore Jaiswal · Rosanne Liu · Ryan Lowe · Joelle Pineau · Yoshua Bengio
- 2020 Poster: PowerNorm: Rethinking Batch Normalization in Transformers »
  Sheng Shen · Zhewei Yao · Amir Gholaminejad · Michael Mahoney · Kurt Keutzer
- 2020 Poster: Adversarial Filters of Dataset Biases »
  Ronan Le Bras · Swabha Swayamdipta · Chandra Bhagavatula · Rowan Zellers · Matthew Peters · Ashish Sabharwal · Yejin Choi
- 2020 Poster: Train Big, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers »
  Zhuohan Li · Eric Wallace · Sheng Shen · Kevin Lin · Kurt Keutzer · Dan Klein · Joseph Gonzalez