Poster in Workshop: The First Workshop on Pre-training: Perspectives, Pitfalls, and Paths Forward

Improved Generalization Bounds for Transfer Learning via Neural Collapse

Tomer Galanti · Andras Gyorgy · Marcus Hutter


Abstract:

Using representations learned by large pretrained models, also called foundation models, to solve new tasks with limited data has proven successful in a wide range of machine learning problems. Recently, Galanti et al. (2022) introduced a theoretical framework for studying this transfer learning setting for classification. Their analysis builds on the recently observed phenomenon that the features learned by overparameterized deep classification networks exhibit an interesting clustering property, called neural collapse (Papyan et al., 2020). A cornerstone of their analysis is the demonstration that neural collapse generalizes from the source classes to new target classes. However, their analysis is limited in that it relies on several unrealistic modeling assumptions. In this work, we provide an improved theoretical analysis that significantly relaxes these assumptions.
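As a rough illustration of the clustering property mentioned above (a sketch, not part of the abstract or the authors' code), the snippet below computes a class-distance normalized variance (CDNV) in the spirit of Galanti et al. (2022): within-class feature variance divided by the squared distance between class means, averaged over class pairs. Values near zero indicate collapsed, tightly clustered features. The function name `cdnv`, the synthetic data, and all variable names are illustrative assumptions.

```python
# Minimal sketch: quantify neural-collapse-style clustering of features
# via the class-distance normalized variance (CDNV). Assumes `features`
# has shape (n_samples, dim) and `labels` holds one class id per sample.
import numpy as np

def cdnv(features: np.ndarray, labels: np.ndarray) -> float:
    """Average pairwise CDNV: (Var_c + Var_c') / (2 * ||mu_c - mu_c'||^2)."""
    classes = np.unique(labels)
    means = {c: features[labels == c].mean(axis=0) for c in classes}
    variances = {
        c: ((features[labels == c] - means[c]) ** 2).sum(axis=1).mean()
        for c in classes
    }
    vals = []
    for i, c in enumerate(classes):
        for c2 in classes[i + 1:]:
            dist_sq = np.sum((means[c] - means[c2]) ** 2)
            vals.append((variances[c] + variances[c2]) / (2 * dist_sq))
    return float(np.mean(vals))

# Example: well-separated, tightly clustered synthetic features give a small CDNV.
rng = np.random.default_rng(0)
feats = np.concatenate([rng.normal(0, 0.1, (50, 8)), rng.normal(5, 0.1, (50, 8))])
labs = np.array([0] * 50 + [1] * 50)
print(cdnv(feats, labs))  # close to 0 -> strong clustering (neural collapse)
```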
