In the context of few-shot learning, it is currently believed that a fixed pre-trained (PT) model with a fine-tuned final layer at evaluation time outperforms standard meta-learning algorithms. We re-examine these claims through an in-depth empirical study of an extensive set of formally diverse datasets, comparing PT to Model-Agnostic Meta-Learning (MAML). Unlike previous work, we emphasize a fair comparison by using the same architecture, the same optimizer, and training all models to convergence. Crucially, we use a more rigorous statistical tool -- the effect size (Cohen's d) -- to determine the practical significance of the difference between a model trained with PT and one trained with MAML. We then use a previously proposed metric -- the diversity coefficient -- to compute the average formal diversity of a dataset. Using this analysis, we demonstrate the following: 1. when the formal diversity of a dataset is low, PT beats MAML on average, and 2. when the formal diversity is high, MAML beats PT on average. The caveat is that the magnitude of the average difference between PT and MAML, measured by the effect size, is small according to classical statistical thresholds -- less than 0.2. Nevertheless, this observation contradicts the currently held belief that a pre-trained model is always better than a meta-learned model. Our extensive experiments consider 21 few-shot learning benchmarks, including the large-scale few-shot learning benchmark Meta-Dataset. We also show no significant difference between a MAML model and a PT model with GPT-2 on OpenWebText. We therefore conclude that a pre-trained model does not always beat a meta-learned model and that the formal diversity of the dataset is a driving factor.
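As a concrete illustration of the statistical tool referenced above, the following is a minimal sketch (not the authors' code) of how Cohen's d can be computed between the per-task test accuracies of a PT model and a MAML model; the accuracy arrays are hypothetical placeholders, and |d| < 0.2 is conventionally read as a "small" effect.

```python
import numpy as np

def cohens_d(group_a: np.ndarray, group_b: np.ndarray) -> float:
    """Cohen's d effect size between two samples, using the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    var_a, var_b = group_a.var(ddof=1), group_b.var(ddof=1)
    pooled_std = np.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (group_a.mean() - group_b.mean()) / pooled_std

# Hypothetical per-task accuracies on some few-shot benchmark (placeholders, not real results)
maml_acc = np.array([0.61, 0.59, 0.66, 0.62, 0.64])
pt_acc = np.array([0.62, 0.58, 0.65, 0.60, 0.63])

d = cohens_d(maml_acc, pt_acc)
print(f"Cohen's d = {d:.3f}")  # magnitudes below 0.2 count as a small effect
```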
Author Information
Brando Miranda (Stanford University)
Patrick Yu (University of Illinois Urbana-Champaign)
Saumya Goyal (Stanford University)
Yu-Xiong Wang (University of Illinois at Urbana-Champaign)
Sanmi Koyejo (Stanford University)