Exploring the Limits of Large Scale Pre-training
Hanie Sedghi

Recent developments in large-scale machine learning suggest that by scaling up data, model size, and training time properly, one might observe that improvements in pre-training transfer favorably to most downstream tasks. In this work, we systematically study this phenomenon and establish that, as we increase upstream accuracy, the performance of downstream tasks saturates. In particular, we investigate more than 4800 experiments on Vision Transformers, MLP-Mixers, and ResNets with numbers of parameters ranging from ten million to ten billion, trained on the largest available image datasets (JFT, ImageNet21K) and evaluated on more than 20 downstream image recognition tasks. We propose a model for downstream performance that reflects this saturation and captures the nonlinear relationship between upstream and downstream performance. Delving deeper into the reasons that give rise to these phenomena, we show that the observed saturation behavior is closely related to the way representations evolve through the layers of the models.
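The paper's exact functional form is not reproduced here; as an illustration only, the following sketch fits a hypothetical saturating curve (an exponential approach to an asymptote, which is an assumption and not necessarily the model proposed in the work) to toy (upstream accuracy, downstream accuracy) pairs, to show what a saturation-aware performance model might look like in practice.

    # Minimal sketch, assuming a hypothetical exponential-saturation form;
    # the toy data stand in for many pre-trained models evaluated upstream
    # and on a downstream task.
    import numpy as np
    from scipy.optimize import curve_fit

    def saturating_model(acc_up, a, b, c):
        # Downstream accuracy approaches an asymptote c as upstream
        # accuracy grows; a and b control the rate of approach.
        return c - a * np.exp(-b * acc_up)

    np.random.seed(0)
    acc_up = np.linspace(0.3, 0.9, 30)
    acc_down = 0.8 - 0.5 * np.exp(-4.0 * acc_up) + np.random.normal(0, 0.01, acc_up.shape)

    params, _ = curve_fit(saturating_model, acc_up, acc_down, p0=[0.5, 4.0, 0.8])
    a, b, c = params
    print(f"Estimated asymptotic downstream accuracy: {c:.3f}")

Under this toy form, no amount of additional upstream improvement pushes downstream accuracy past the fitted asymptote c, which is the qualitative saturation behavior the abstract describes.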

Author Information

Hanie Sedghi (Google Research, Brain team)

Hanie Sedghi is a Senior Research Scientist at Google DeepMind, where she leads the DeepPhenomena team. The focus of her research has been understanding deep learning models in order to push their boundaries, not only for (out-of-distribution) generalization but also for the broader algorithmic and scientific reasoning capabilities of large language models. She has served as a workshop chair for NeurIPS 2022, a tutorial chair for ICML 2022 and 2023, and a program chair for CoLLAs 2023, and has been an area chair for NeurIPS, ICLR, and ICML as well as a member of the JMLR Editorial Board for the last few years. Prior to Google, Hanie was a Research Scientist at the Allen Institute for Artificial Intelligence and, before that, a postdoctoral fellow at UC Irvine. She received her PhD from the University of Southern California with a minor in Mathematics.
