Reusing representations learned by large pretrained models, also called foundation models, on new tasks with little data has been successful across a wide range of machine learning problems. Recently, Galanti et al. (2022) introduced a theoretical framework for studying this transfer-learning setting for classification. Their analysis builds on the recently observed phenomenon that the features learned by overparameterized deep classification networks exhibit an interesting clustering property, called neural collapse (Papyan et al., 2020). A cornerstone of their analysis demonstrates that neural collapse generalizes from the source classes to new target classes. However, this analysis is limited, as it relies on several unrealistic assumptions. In this work, we provide an improved theoretical analysis that significantly relaxes these modeling assumptions.
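To make the clustering property concrete, below is a minimal NumPy sketch of a class-distance normalized variance (CDNV) style statistic: the within-class feature variance divided by the squared distance between class means, which approaches zero as class features collapse onto their means. The function name and toy data are illustrative assumptions, not code from the paper.

```python
import numpy as np

def cdnv(feats_a, feats_b):
    """Class-distance normalized variance between two classes.

    feats_a, feats_b: arrays of shape (n_samples, dim) holding the
    penultimate-layer features of samples from each class.
    Values near zero indicate collapsed (tightly clustered) classes.
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    var_a = np.mean(np.sum((feats_a - mu_a) ** 2, axis=1))
    var_b = np.mean(np.sum((feats_b - mu_b) ** 2, axis=1))
    return (var_a + var_b) / (2.0 * np.sum((mu_a - mu_b) ** 2))

# Toy illustration: tightly clustered classes yield a small CDNV.
rng = np.random.default_rng(0)
class_a = rng.normal(loc=0.0, scale=0.05, size=(100, 16))
class_b = rng.normal(loc=1.0, scale=0.05, size=(100, 16))
print(cdnv(class_a, class_b))  # close to 0 -> near-collapsed features
```

In this hypothetical sketch, the statistic would be computed on features of new target classes to probe whether the clustering carries over from the source classes.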
Author Information
Tomer Galanti (Massachusetts Institute of Technology)
Andras Gyorgy (DeepMind)
Marcus Hutter (DeepMind)
More from the Same Authors
- 2022: Non-stationary Bandits and Meta-Learning with a Small Set of Optimal Arms »
  MohammadJavad Azizi · Thang Duong · Yasin Abbasi-Yadkori · Claire Vernade · Andras Gyorgy · Mohammad Ghavamzadeh
- 2023 Poster: Understanding Self-Predictive Learning for Reinforcement Learning »
  Yunhao Tang · Zhaohan Guo · Pierre Richemond · Bernardo Avila Pires · Yash Chandak · Remi Munos · Mark Rowland · Mohammad Gheshlaghi Azar · Charline Le Lan · Clare Lyle · Andras Gyorgy · Shantanu Thakoor · Will Dabney · Bilal Piot · Daniele Calandriello · Michal Valko
- 2023 Poster: Feature learning in deep classifiers through Intermediate Neural Collapse »
  Akshay Rangamani · Marius Lindegaard · Tomer Galanti · Tomaso Poggio
- 2023 Poster: Distributed Contextual Linear Bandits with Minimax Optimal Communication Cost »
  Sanae Amani · Tor Lattimore · Andras Gyorgy · Lin Yang
- 2023 Poster: Memory-Based Meta-Learning on Non-Stationary Distributions »
  Tim Genewein · Gregoire Deletang · Anian Ruoss · Li Kevin Wenliang · Elliot Catt · Vincent Dutordoir · Jordi Grau-Moya · Laurent Orseau · Marcus Hutter · Joel Veness
- 2023 Poster: Atari-5: Distilling the Arcade Learning Environment down to Five Games »
  Matthew Aitchison · Penny Sweetser · Marcus Hutter
- 2021 Poster: Adapting to Delays and Data in Adversarial Multi-Armed Bandits »
  Andras Gyorgy · Pooria Joulani
- 2021 Spotlight: Adapting to Delays and Data in Adversarial Multi-Armed Bandits »
  Andras Gyorgy · Pooria Joulani
- 2021 Poster: Counterfactual Credit Assignment in Model-Free Reinforcement Learning »
  Thomas Mesnard · Theophane Weber · Fabio Viola · Shantanu Thakoor · Alaa Saade · Anna Harutyunyan · Will Dabney · Thomas Stepleton · Nicolas Heess · Arthur Guez · Eric Moulines · Marcus Hutter · Lars Buesing · Remi Munos
- 2021 Spotlight: Counterfactual Credit Assignment in Model-Free Reinforcement Learning »
  Thomas Mesnard · Theophane Weber · Fabio Viola · Shantanu Thakoor · Alaa Saade · Anna Harutyunyan · Will Dabney · Thomas Stepleton · Nicolas Heess · Arthur Guez · Eric Moulines · Marcus Hutter · Lars Buesing · Remi Munos
- 2020 Poster: Non-Stationary Delayed Bandits with Intermediate Observations »
  Claire Vernade · Andras Gyorgy · Timothy Mann
- 2020 Poster: A simpler approach to accelerated optimization: iterative averaging meets optimism »
  Pooria Joulani · Anant Raj · Andras Gyorgy · Csaba Szepesvari
- 2019 Poster: Learning from Delayed Outcomes via Proxies with Applications to Recommender Systems »
  Timothy Mann · Sven Gowal · Andras Gyorgy · Huiyi Hu · Ray Jiang · Balaji Lakshminarayanan · Prav Srinivasan
- 2019 Oral: Learning from Delayed Outcomes via Proxies with Applications to Recommender Systems »
  Timothy Mann · Sven Gowal · Andras Gyorgy · Huiyi Hu · Ray Jiang · Balaji Lakshminarayanan · Prav Srinivasan
- 2019 Poster: CapsAndRuns: An Improved Method for Approximately Optimal Algorithm Configuration »
  Gellért Weisz · Andras Gyorgy · Csaba Szepesvari
- 2019 Oral: CapsAndRuns: An Improved Method for Approximately Optimal Algorithm Configuration »
  Gellért Weisz · Andras Gyorgy · Csaba Szepesvari
- 2018 Poster: LeapsAndBounds: A Method for Approximately Optimal Algorithm Configuration »
  Gellért Weisz · Andras Gyorgy · Csaba Szepesvari
- 2018 Oral: LeapsAndBounds: A Method for Approximately Optimal Algorithm Configuration »
  Gellért Weisz · Andras Gyorgy · Csaba Szepesvari