The number of parameters in large transformers has been observed to grow exponentially. Despite notable performance improvements, concerns have been raised that models growing at this rate will run out of training data in the near future. As manifested in the neural scaling law, modern learning backbones are not data-efficient: to maintain the utility of the added model capacity, the training data must be scaled up proportionally. In this paper, we study the neural scaling law under the previously overlooked data-scarcity regime, focusing on the more challenging situation where a gigantic model must be trained with a disproportionately limited supply of training data. We find that existing power laws underestimate the data inefficiency of large transformers: their performance drops significantly when the training set is insufficient. Fortunately, we discover another blessing: such a data-inefficient scaling law can be restored through a model-reusing approach that warm-starts the training of a large model by initializing it with smaller models. Our empirical study shows that model reusing can effectively reproduce the power law under the data-scarcity regime. When progressively applying model reusing to expand the model size, we also observe consistent performance improvements in large transformers. We release our code at: https://github.com/VITA-Group/Data-Efficient-Scaling.
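For a concrete picture of the warm-start idea mentioned above, below is a minimal, hypothetical sketch of reusing a smaller trained model to initialize a larger one. The abstract does not specify the paper's exact expansion rule, so the function name `expand_linear`, the dimensions, and the random fill-in for new entries are illustrative assumptions, not the authors' method.

```python
# Minimal sketch (assumptions noted above): embed the trained weights of a
# smaller layer into a larger weight matrix and randomly initialize the rest,
# so the large model is warm-started rather than trained from scratch.
import numpy as np

def expand_linear(w_small, d_out_large, d_in_large, init_std=0.02, seed=0):
    """Place a trained (d_out_small, d_in_small) matrix into the top-left block
    of a larger (d_out_large, d_in_large) matrix; fill the rest with small noise."""
    rng = np.random.default_rng(seed)
    d_out_small, d_in_small = w_small.shape
    assert d_out_large >= d_out_small and d_in_large >= d_in_small
    w_large = rng.normal(0.0, init_std, size=(d_out_large, d_in_large))
    w_large[:d_out_small, :d_in_small] = w_small  # reuse the smaller model's weights
    return w_large

# Example: grow a 256x256 projection from a small trained transformer to 512x512,
# then continue training the larger model on the limited dataset.
w_small = np.random.default_rng(1).normal(0.0, 0.02, size=(256, 256))
w_large = expand_linear(w_small, 512, 512)
print(w_large.shape)  # (512, 512)
```

Applying such an expansion repeatedly (small to medium to large) corresponds to the progressive model reusing described in the abstract.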
Author Information
Peihao Wang (The University of Texas at Austin)
Rameswar Panda (MIT-IBM Watson AI Lab)
Zhangyang “Atlas” Wang (The University of Texas at Austin)
More from the Same Authors
- 2023: H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models
  Zhenyu Zhang · Ying Sheng · Tianyi Zhou · Tianlong Chen · Lianmin Zheng · Ruisi Cai · Zhao Song · Yuandong Tian · Christopher Re · Clark Barrett · Zhangyang “Atlas” Wang · Beidi Chen
- 2023 Poster: Learning to Optimize Differentiable Games
  Xuxi Chen · Nelson Vadori · Tianlong Chen · Zhangyang “Atlas” Wang
- 2023 Poster: Robust Weight Signatures: Gaining Robustness as Easy as Patching Weights?
  Ruisi Cai · Zhenyu Zhang · Zhangyang “Atlas” Wang
- 2023 Oral: Instant Soup: Cheap Pruning Ensembles in A Single Pass Can Draw Lottery Tickets from Large Models
  Ajay Jaiswal · Shiwei Liu · Tianlong Chen · Ding · Zhangyang “Atlas” Wang
- 2023 Poster: Are Large Kernels Better Teachers than Transformers for ConvNets?
  Tianjin Huang · Lu Yin · Zhenyu Zhang · Li Shen · Meng Fang · Mykola Pechenizkiy · Zhangyang “Atlas” Wang · Shiwei Liu
- 2023 Poster: Outline, Then Details: Syntactically Guided Coarse-To-Fine Code Generation
  Wenqing Zheng · S P Sharan · Ajay Jaiswal · Kevin Wang · Yihan Xi · Dejia Xu · Zhangyang “Atlas” Wang
- 2023 Poster: Towards Constituting Mathematical Structures for Learning to Optimize
  Jialin Liu · Xiaohan Chen · Zhangyang “Atlas” Wang · Wotao Yin · HanQin Cai
- 2023 Poster: Graph Ladling: Shockingly Simple Parallel GNN Training without Intermediate Communication
  Ajay Jaiswal · Shiwei Liu · Tianlong Chen · Ding · Zhangyang “Atlas” Wang
- 2023 Poster: Lowering the Pre-training Tax for Gradient-based Subset Training: A Lightweight Distributed Pre-Training Toolkit
  Yeonju Ro · Zhangyang “Atlas” Wang · Vijay Chidambaram · Aditya Akella
- 2023 Poster: Instant Soup: Cheap Pruning Ensembles in A Single Pass Can Draw Lottery Tickets from Large Models
  Ajay Jaiswal · Shiwei Liu · Tianlong Chen · Ding · Zhangyang “Atlas” Wang
- 2022: Invited talk #8 Atlas Wang. Title: “Free Knowledge” in Chest X-rays: Contrastive Learning of Images and Their Radiomics
  Zhangyang “Atlas” Wang
- 2022: APP: Anytime Progressive Pruning
  Diganta Misra · Bharat Runwal · Tianlong Chen · Zhangyang “Atlas” Wang · Irina Rish
- 2022 Poster: Data-Efficient Double-Win Lottery Tickets from Robust Pre-training
  Tianlong Chen · Zhenyu Zhang · Sijia Liu · Yang Zhang · Shiyu Chang · Zhangyang “Atlas” Wang
- 2022 Poster: Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness
  Tianlong Chen · Huan Zhang · Zhenyu Zhang · Shiyu Chang · Sijia Liu · Pin-Yu Chen · Zhangyang “Atlas” Wang
- 2022 Spotlight: Data-Efficient Double-Win Lottery Tickets from Robust Pre-training
  Tianlong Chen · Zhenyu Zhang · Sijia Liu · Yang Zhang · Shiyu Chang · Zhangyang “Atlas” Wang
- 2022 Spotlight: Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness
  Tianlong Chen · Huan Zhang · Zhenyu Zhang · Shiyu Chang · Sijia Liu · Pin-Yu Chen · Zhangyang “Atlas” Wang
- 2022 Poster: Universality of Winning Tickets: A Renormalization Group Perspective
  William T. Redman · Tianlong Chen · Zhangyang “Atlas” Wang · Akshunna S. Dogra
- 2022 Poster: VariGrow: Variational Architecture Growing for Task-Agnostic Continual Learning based on Bayesian Novelty
  Randy Ardywibowo · Zepeng Huo · Zhangyang “Atlas” Wang · Bobak Mortazavi · Shuai Huang · Xiaoning Qian
- 2022 Poster: Partial and Asymmetric Contrastive Learning for Out-of-Distribution Detection in Long-Tailed Recognition
  Haotao Wang · Aston Zhang · Yi Zhu · Shuai Zheng · Mu Li · Alex Smola · Zhangyang “Atlas” Wang
- 2022 Poster: Training Your Sparse Neural Network Better with Any Mask
  Ajay Jaiswal · Haoyu Ma · Tianlong Chen · Ying Ding · Zhangyang “Atlas” Wang
- 2022 Oral: Partial and Asymmetric Contrastive Learning for Out-of-Distribution Detection in Long-Tailed Recognition
  Haotao Wang · Aston Zhang · Yi Zhu · Shuai Zheng · Mu Li · Alex Smola · Zhangyang “Atlas” Wang
- 2022 Spotlight: Universality of Winning Tickets: A Renormalization Group Perspective
  William T. Redman · Tianlong Chen · Zhangyang “Atlas” Wang · Akshunna S. Dogra
- 2022 Spotlight: Training Your Sparse Neural Network Better with Any Mask
  Ajay Jaiswal · Haoyu Ma · Tianlong Chen · Ying Ding · Zhangyang “Atlas” Wang
- 2022 Spotlight: VariGrow: Variational Architecture Growing for Task-Agnostic Continual Learning based on Bayesian Novelty
  Randy Ardywibowo · Zepeng Huo · Zhangyang “Atlas” Wang · Bobak Mortazavi · Shuai Huang · Xiaoning Qian
- 2022 Poster: Coarsening the Granularity: Towards Structurally Sparse Lottery Tickets
  Tianlong Chen · Xuxi Chen · Xiaolong Ma · Yanzhi Wang · Zhangyang “Atlas” Wang
- 2022 Poster: Removing Batch Normalization Boosts Adversarial Training
  Haotao Wang · Aston Zhang · Shuai Zheng · Xingjian Shi · Mu Li · Zhangyang “Atlas” Wang
- 2022 Poster: Neural Implicit Dictionary Learning via Mixture-of-Expert Training
  Peihao Wang · Zhiwen Fan · Tianlong Chen · Zhangyang “Atlas” Wang
- 2022 Spotlight: Removing Batch Normalization Boosts Adversarial Training
  Haotao Wang · Aston Zhang · Shuai Zheng · Xingjian Shi · Mu Li · Zhangyang “Atlas” Wang
- 2022 Spotlight: Neural Implicit Dictionary Learning via Mixture-of-Expert Training
  Peihao Wang · Zhiwen Fan · Tianlong Chen · Zhangyang “Atlas” Wang
- 2022 Spotlight: Coarsening the Granularity: Towards Structurally Sparse Lottery Tickets
  Tianlong Chen · Xuxi Chen · Xiaolong Ma · Yanzhi Wang · Zhangyang “Atlas” Wang
- 2021 Poster: Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm
  Mingkang Zhu · Tianlong Chen · Zhangyang “Atlas” Wang
- 2021 Oral: Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm
  Mingkang Zhu · Tianlong Chen · Zhangyang “Atlas” Wang
- 2021 Poster: Graph Contrastive Learning Automated
  Yuning You · Tianlong Chen · Yang Shen · Zhangyang “Atlas” Wang
- 2021 Poster: Self-Damaging Contrastive Learning
  Ziyu Jiang · Tianlong Chen · Bobak Mortazavi · Zhangyang “Atlas” Wang
- 2021 Oral: Graph Contrastive Learning Automated
  Yuning You · Tianlong Chen · Yang Shen · Zhangyang “Atlas” Wang
- 2021 Spotlight: Self-Damaging Contrastive Learning
  Ziyu Jiang · Tianlong Chen · Bobak Mortazavi · Zhangyang “Atlas” Wang
- 2021 Poster: A Unified Lottery Ticket Hypothesis for Graph Neural Networks
  Tianlong Chen · Yongduo Sui · Xuxi Chen · Aston Zhang · Zhangyang “Atlas” Wang
- 2021 Poster: Efficient Lottery Ticket Finding: Less Data is More
  Zhenyu Zhang · Xuxi Chen · Tianlong Chen · Zhangyang “Atlas” Wang
- 2021 Spotlight: Efficient Lottery Ticket Finding: Less Data is More
  Zhenyu Zhang · Xuxi Chen · Tianlong Chen · Zhangyang “Atlas” Wang
- 2021 Spotlight: A Unified Lottery Ticket Hypothesis for Graph Neural Networks
  Tianlong Chen · Yongduo Sui · Xuxi Chen · Aston Zhang · Zhangyang “Atlas” Wang
- 2020 Poster: Self-PU: Self Boosted and Calibrated Positive-Unlabeled Training
  Xuxi Chen · Wuyang Chen · Tianlong Chen · Ye Yuan · Chen Gong · Kewei Chen · Zhangyang “Atlas” Wang
- 2020 Poster: When Does Self-Supervision Help Graph Convolutional Networks?
  Yuning You · Tianlong Chen · Zhangyang “Atlas” Wang · Yang Shen
- 2020 Poster: Automated Synthetic-to-Real Generalization
  Wuyang Chen · Zhiding Yu · Zhangyang “Atlas” Wang · Anima Anandkumar
- 2020 Poster: Eliminating the Invariance on the Loss Landscape of Linear Autoencoders
  Reza Oftadeh · Jiayi Shen · Zhangyang “Atlas” Wang · Dylan Shell
- 2020 Poster: NADS: Neural Architecture Distribution Search for Uncertainty Awareness
  Randy Ardywibowo · Shahin Boluki · Xinyu Gong · Zhangyang “Atlas” Wang · Xiaoning Qian
- 2020 Poster: AutoGAN-Distiller: Searching to Compress Generative Adversarial Networks
  Yonggan Fu · Wuyang Chen · Haotao Wang · Haoran Li · Yingyan Lin · Zhangyang “Atlas” Wang