

Poster

Unraveling Meta-Learning: Understanding Feature Representations for Few-Shot Tasks

Micah Goldblum · Steven Reich · Liam Fowl · Renkun Ni · Valeriia Cherepanova · Tom Goldstein

Virtual

Keywords: [ Representation Learning ] [ Meta-learning and Automated ML ] [ Transfer, Multitask and Meta-learning ]


Abstract:

Meta-learning algorithms produce feature extractors that achieve state-of-the-art performance on few-shot classification. While the literature is rich with meta-learning methods, little is known about why the resulting feature extractors perform so well. We develop a better understanding of the underlying mechanics of meta-learning and of how meta-learned models differ from models trained classically. In doing so, we introduce and verify several hypotheses for why meta-learned models perform better. Furthermore, we develop a regularizer which boosts the performance of standard training routines for few-shot classification. In many cases, our routine outperforms meta-learning while simultaneously running an order of magnitude faster.
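The abstract does not specify the form of the regularizer, so the following is only a rough illustration of the general idea of adding a feature-space penalty to an otherwise standard (non-meta) classification objective. The names `backbone`, `classifier`, `clustering_penalty`, and `reg_weight` are hypothetical placeholders, not identifiers from the paper, and the within-class clustering penalty shown is an illustrative stand-in rather than the authors' actual regularizer.

```python
# Hypothetical sketch: classical classification training augmented with a
# feature-space regularizer that encourages same-class features to cluster.
# The specific regularizer from the paper is NOT described on this page;
# this penalty is an assumed, illustrative example only.
import torch
import torch.nn.functional as F


def clustering_penalty(features, labels):
    """Mean squared distance of each feature to its class centroid
    (assumed illustrative regularizer, not the paper's method)."""
    penalty = features.new_zeros(())
    classes = labels.unique()
    for c in classes:
        class_feats = features[labels == c]
        centroid = class_feats.mean(dim=0, keepdim=True)
        penalty = penalty + ((class_feats - centroid) ** 2).sum(dim=1).mean()
    return penalty / classes.numel()


def training_step(backbone, classifier, images, labels, optimizer, reg_weight=0.1):
    """One step of standard training plus the hypothetical feature regularizer."""
    optimizer.zero_grad()
    features = backbone(images)          # feature extractor output
    logits = classifier(features)        # linear classification head
    loss = F.cross_entropy(logits, labels) \
        + reg_weight * clustering_penalty(features, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

At few-shot test time, such a feature extractor would typically be frozen and a simple classifier (e.g., nearest class centroid or a linear head) fit on the handful of labeled support examples; that evaluation protocol is standard in the few-shot literature and is assumed here rather than taken from this page.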
