Poster
Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time
Mitchell Wortsman · Gabriel Ilharco · Samir Gadre · Becca Roelofs · Raphael Gontijo Lopes · Ari Morcos · Hongseok Namkoong · Ali Farhadi · Yair Carmon · Simon Kornblith · Ludwig Schmidt

Thu Jul 21 03:00 PM -- 05:00 PM (PDT) @ Hall E #500

The conventional recipe for maximizing model accuracy is to (1) train multiple models with various hyperparameters and (2) pick the individual model which performs best on a held-out validation set, discarding the remainder. In this paper, we revisit the second step of this procedure in the context of fine-tuning large pre-trained models, where fine-tuned models often appear to lie in a single low error basin. We show that averaging the weights of multiple models fine-tuned with different hyperparameter configurations often improves accuracy and robustness. Unlike a conventional ensemble, we may average many models without incurring any additional inference or memory costs---we call the results “model soups.” When fine-tuning large pre-trained models such as CLIP, ALIGN, and a ViT-G pre-trained on JFT, our soup recipe provides significant improvements over the best model in a hyperparameter sweep on ImageNet. The resulting ViT-G model, which attains 90.94% top-1 accuracy on ImageNet, achieved a new state of the art. Furthermore, we show that the model soup approach extends to multiple image classification and natural language processing tasks, improves out-of-distribution performance, and improves zero-shot performance on new downstream tasks. Finally, we analytically relate the performance similarity of weight-averaging and logit-ensembling to flatness of the loss and confidence of the predictions, and validate this relation empirically. Code is available at https://github.com/mlfoundations/model-soups.
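To make the idea concrete, below is a minimal sketch of the simplest variant described in the abstract, a "uniform soup" that averages the weights of several fine-tuned checkpoints sharing one architecture. This is an illustrative PyTorch snippet under the assumption of identical state_dict keys across checkpoints; the function name and file paths are hypothetical and not taken from the official mlfoundations/model-soups code. The paper's stronger "greedy soup" recipe additionally adds checkpoints one at a time, keeping each only if held-out validation accuracy improves.

```python
import torch

def make_uniform_soup(state_dicts):
    """Element-wise average of a list of state_dicts (a uniform model soup).

    Assumes all checkpoints come from fine-tuning the same architecture,
    so every state_dict has identical keys and tensor shapes.
    """
    soup = {}
    for key in state_dicts[0]:
        # Stack the same parameter across checkpoints and take the mean.
        # Integer buffers (e.g., BatchNorm counters) are cast to float here;
        # a production implementation would handle them separately.
        soup[key] = torch.mean(
            torch.stack([sd[key].float() for sd in state_dicts]), dim=0
        )
    return soup

# Hypothetical usage with checkpoints from a hyperparameter sweep:
# paths = ["finetune_lr1e-5.pt", "finetune_lr3e-5.pt"]
# state_dicts = [torch.load(p, map_location="cpu") for p in paths]
# model.load_state_dict(make_uniform_soup(state_dicts))
# The souped model is a single network, so inference and memory costs
# match one model, unlike a logit ensemble over all checkpoints.
```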

Author Information

Mitchell Wortsman (University of Washington)
Gabriel Ilharco (University of Washington)
Samir Gadre (Columbia University)
Becca Roelofs (Google Research)
Raphael Gontijo Lopes (Google Brain)
Ari Morcos (Facebook AI Research (FAIR))
Hongseok Namkoong (Columbia University)
Ali Farhadi (University of Washington, Allen Institute for AI)
Yair Carmon (Tel Aviv University)
Simon Kornblith (Google Brain)
Ludwig Schmidt (University of Washington)
