

Poster

A Study on Transformer Configuration and Training Objective

Fuzhao Xue · Jianghai Chen · Aixin Sun · Xiaozhe Ren · Zangwei Zheng · Xiaoxin He · Yongming Chen · Xin Jiang · Yang You

Exhibit Hall 1 #315

Abstract: Transformer-based models have delivered impressive results on many tasks, particularly vision and language tasks. In many model training situations, conventional configurations are adopted by default. For example, the base model is usually set with a hidden size (i.e., model width) of 768 and 12 transformer layers (i.e., model depth). In this paper, we revisit these conventional configurations by studying the relationship between transformer configuration and training objective. We show that the optimal transformer configuration is closely related to the training objective. Specifically, compared with the simple classification objective, the masked autoencoder objective is effective in alleviating the over-smoothing issue in deep transformer training. Based on this finding, we propose "Bamboo", the idea of using deeper and narrower transformer configurations for masked autoencoder training. On ImageNet, with this simple change in configuration, the re-designed Base-level transformer achieves 84.2% top-1 accuracy, outperforming SoTA models such as MAE by 0.9%. On language tasks, the re-designed model outperforms BERT with the default configuration by 1.1 points on average on the GLUE benchmark across 8 datasets.
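The abstract contrasts the conventional Base configuration (width 768, depth 12) with a deeper-and-narrower alternative at a comparable parameter budget. The sketch below illustrates that trade-off; it is not taken from the paper, and the specific deeper/narrower values (hidden size 512, 27 layers) and the rough parameter formula are illustrative assumptions only.

```python
# Minimal sketch: conventional Base config vs. a hypothetical deeper-and-narrower
# config of roughly comparable encoder parameter count. Values for the
# deeper/narrower variant are assumptions for illustration, not the paper's.
from dataclasses import dataclass


@dataclass
class TransformerConfig:
    hidden_size: int       # model width
    num_layers: int        # model depth
    num_heads: int
    mlp_ratio: float = 4.0

    def approx_params(self) -> int:
        # Rough per-layer count: attention projections (~4*d^2) + MLP (~2*mlp_ratio*d^2).
        d = self.hidden_size
        per_layer = 4 * d * d + int(2 * self.mlp_ratio * d * d)
        return self.num_layers * per_layer


# Conventional Base configuration mentioned in the abstract.
base = TransformerConfig(hidden_size=768, num_layers=12, num_heads=12)

# Hypothetical deeper-and-narrower variant at a similar parameter budget.
deeper_narrower = TransformerConfig(hidden_size=512, num_layers=27, num_heads=8)

print(f"base:            ~{base.approx_params() / 1e6:.1f}M encoder params")
print(f"deeper/narrower: ~{deeper_narrower.approx_params() / 1e6:.1f}M encoder params")
```

Both configurations land near the same parameter count, which is the sense in which the abstract's "deeper and narrower" re-design is a configuration change rather than a scaling-up of the model.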
