

Poster

Exploring the Benefit of Activation Sparsity in Pre-training

Zhengyan Zhang · Chaojun Xiao · Qiujieli Qin · Yankai Lin · Zhiyuan Zeng · Xu Han · Zhiyuan Liu · Ruobing Xie · Maosong Sun · Jie Zhou

Hall C 4-9 #610
[ Project Page ]
Thu 25 Jul 2:30 a.m. PDT — 4 a.m. PDT

Abstract: Pre-trained Transformers inherently exhibit sparse activation: only a small fraction of neurons are activated for each token. While sparse activation has been explored through post-training methods, its potential in pre-training remains untapped. In this work, we first study how activation properties change during pre-training. Our examination reveals that Transformers exhibit sparse activation throughout most of the pre-training process, while the activation correlation keeps evolving as training progresses. Leveraging this observation, we propose Switchable Sparse-Dense Learning (SSD). SSD adaptively switches between Mixture-of-Experts (MoE) based sparse training and conventional dense training during pre-training, exploiting the efficiency of sparse training while avoiding its static activation correlation. Compared to dense training, SSD achieves comparable performance with an identical model size and reduces pre-training costs. Moreover, models trained with SSD can be used directly as MoE models for sparse inference, matching the performance of dense models with up to $2\times$ faster inference. Codes are available at https://github.com/thunlp/moefication.
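To make the switching idea concrete, below is a minimal sketch of a feed-forward layer whose neurons are grouped into expert blocks and trained with an alternating sparse/dense schedule. All names (`SwitchableFFN`, the router, the phase schedule) are illustrative assumptions for exposition, not the authors' released implementation; the sketch masks unselected experts rather than skipping their computation, so it shows the training dynamics but not the efficiency gain.

```python
# Illustrative sketch of switchable sparse-dense training (hypothetical code,
# not the official thunlp/moefication implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwitchableFFN(nn.Module):
    """FFN whose hidden neurons are partitioned into expert groups.

    Dense mode uses all neurons; sparse mode keeps only the top-k expert
    groups chosen by a lightweight router, mimicking MoE-style activation.
    """
    def __init__(self, d_model=256, d_ff=1024, n_experts=8, top_k=2):
        super().__init__()
        assert d_ff % n_experts == 0
        self.n_experts, self.top_k = n_experts, top_k
        self.expert_size = d_ff // n_experts
        self.w_in = nn.Linear(d_model, d_ff)
        self.w_out = nn.Linear(d_ff, d_model)
        self.router = nn.Linear(d_model, n_experts)  # scores expert groups

    def forward(self, x, sparse: bool):
        h = F.relu(self.w_in(x))                      # (batch, d_ff)
        if sparse:
            # Zero out neurons outside the top-k scored expert groups.
            scores = self.router(x)                   # (batch, n_experts)
            top = scores.topk(self.top_k, dim=-1).indices
            mask = torch.zeros_like(scores).scatter_(-1, top, 1.0)
            mask = mask.repeat_interleave(self.expert_size, dim=-1)
            h = h * mask
        return self.w_out(h)

# Toy switching schedule: alternate sparse and dense phases during training.
model = SwitchableFFN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(100):
    sparse_phase = (step // 25) % 2 == 0              # switch every 25 steps
    x = torch.randn(32, 256)
    loss = model(x, sparse=sparse_phase).pow(2).mean()  # dummy objective
    opt.zero_grad(); loss.backward(); opt.step()
```

At inference time, running the same layer with `sparse=True` corresponds to using the trained model directly as an MoE model, which is where the reported speedup would come from in a real implementation that skips, rather than masks, the unselected experts.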
