The uncanny ability of over-parameterised neural networks to generalise well has been explained using various "simplicity biases". These theories postulate that neural networks avoid overfitting by first fitting simple, linear classifiers before learning more complex, non-linear functions. Meanwhile, data structure is also recognised as a key ingredient for good generalisation, yet its role in simplicity biases is not yet understood. Here, we show that neural networks trained using stochastic gradient descent initially classify their inputs using lower-order input statistics, like mean and covariance, and exploit higher-order statistics only later during training. We first demonstrate this distributional simplicity bias (DSB) in a solvable model of a single neuron trained on synthetic data. We then demonstrate DSB empirically in a range of deep convolutional networks and vision transformers trained on CIFAR-10, and show that it even holds in networks pre-trained on ImageNet. We discuss the relation of DSB to other simplicity biases and consider its implications for the principle of Gaussian universality in learning.
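The abstract's central comparison can be probed by building a "Gaussian clone" of a dataset: for each class, keep only the mean and covariance and resample from the matching multivariate normal, discarding all higher-order statistics. A network that has only learned lower-order statistics should behave similarly on real and cloned inputs. Below is a minimal NumPy sketch of such a clone; the function name `gaussian_clone` is illustrative and not taken from the paper's code.

```python
import numpy as np

def gaussian_clone(X, y, rng=None):
    """Replace each class of (X, y) by samples drawn from a Gaussian
    with that class's empirical mean and covariance. The clone matches
    the first two moments per class and destroys all higher-order
    input statistics."""
    rng = np.random.default_rng(rng)
    X = np.asarray(X, dtype=float)
    X_clone = np.empty_like(X)
    for c in np.unique(y):
        idx = np.flatnonzero(y == c)
        Xc = X[idx]
        mu = Xc.mean(axis=0)              # class mean (first moment)
        cov = np.cov(Xc, rowvar=False)    # class covariance (second moment)
        X_clone[idx] = rng.multivariate_normal(mu, cov, size=len(idx))
    return X_clone
```

One can then track, over training time, how often a classifier's predictions on real inputs agree with its predictions on the clone: early in training the agreement should be high, and it should drop as the network starts exploiting higher-order statistics.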
Author Information
Maria Refinetti (Laboratoire de Physique de l’Ecole Normale Supérieure Paris)
Alessandro Ingrosso (Abdus Salam International Centre for Theoretical Physics)
Sebastian Goldt (International School of Advanced Studies (SISSA))
More from the Same Authors
- 2022 Poster: The dynamics of representation learning in shallow, non-linear autoencoders »
  Maria Refinetti · Sebastian Goldt
- 2022 Poster: Fluctuations, Bias, Variance & Ensemble of Learners: Exact Asymptotics for Convex Losses in High-Dimension »
  Bruno Loureiro · Cedric Gerbelot · Maria Refinetti · Gabriele Sicuro · Florent Krzakala
- 2022 Poster: Maslow's Hammer in Catastrophic Forgetting: Node Re-Use vs. Node Activation »
  Sebastian Lee · Stefano Sarao Mannelli · Claudia Clopath · Sebastian Goldt · Andrew Saxe
- 2022 Spotlight: Maslow's Hammer in Catastrophic Forgetting: Node Re-Use vs. Node Activation »
  Sebastian Lee · Stefano Sarao Mannelli · Claudia Clopath · Sebastian Goldt · Andrew Saxe
- 2022 Spotlight: The dynamics of representation learning in shallow, non-linear autoencoders »
  Maria Refinetti · Sebastian Goldt
- 2022 Spotlight: Fluctuations, Bias, Variance & Ensemble of Learners: Exact Asymptotics for Convex Losses in High-Dimension »
  Bruno Loureiro · Cedric Gerbelot · Maria Refinetti · Gabriele Sicuro · Florent Krzakala
- 2021 Poster: Classifying high-dimensional Gaussian mixtures: Where kernel methods fail and neural networks succeed »
  Maria Refinetti · Sebastian Goldt · Florent Krzakala · Lenka Zdeborova
- 2021 Poster: Align, then memorise: the dynamics of learning with feedback alignment »
  Maria Refinetti · Stéphane d'Ascoli · Ruben Ohana · Sebastian Goldt
- 2021 Spotlight: Align, then memorise: the dynamics of learning with feedback alignment »
  Maria Refinetti · Stéphane d'Ascoli · Ruben Ohana · Sebastian Goldt
- 2021 Spotlight: Classifying high-dimensional Gaussian mixtures: Where kernel methods fail and neural networks succeed »
  Maria Refinetti · Sebastian Goldt · Florent Krzakala · Lenka Zdeborova
- 2021 Poster: Continual Learning in the Teacher-Student Setup: Impact of Task Similarity »
  Sebastian Lee · Sebastian Goldt · Andrew Saxe
- 2021 Spotlight: Continual Learning in the Teacher-Student Setup: Impact of Task Similarity »
  Sebastian Lee · Sebastian Goldt · Andrew Saxe
- 2020 Poster: Double Trouble in Double Descent: Bias and Variance(s) in the Lazy Regime »
  Stéphane d'Ascoli · Maria Refinetti · Giulio Biroli · Florent Krzakala
- 2019: Poster discussion »
  Roman Novak · Maxime Gabella · Frederic Dreyer · Siavash Golkar · Anh Tong · Irina Higgins · Mirco Milletari · Joe Antognini · Sebastian Goldt · Adín Ramírez Rivera · Roberto Bondesan · Ryo Karakida · Remi Tachet des Combes · Michael Mahoney · Nicholas Walker · Stanislav Fort · Samuel Smith · Rohan Ghosh · Aristide Baratin · Diego Granziol · Stephen Roberts · Dmitry Vetrov · Andrew Wilson · César Laurent · Valentin Thomas · Simon Lacoste-Julien · Dar Gilboa · Daniel Soudry · Anupam Gupta · Anirudh Goyal · Yoshua Bengio · Erich Elsen · Soham De · Stanislaw Jastrzebski · Charles H Martin · Samira Shabanian · Aaron Courville · Shorato Akaho · Lenka Zdeborova · Ethan Dyer · Maurice Weiler · Pim de Haan · Taco Cohen · Max Welling · Ping Luo · zhanglin peng · Nasim Rahaman · Loic Matthey · Danilo J. Rezende · Jaesik Choi · Kyle Cranmer · Lechao Xiao · Jaehoon Lee · Yasaman Bahri · Jeffrey Pennington · Greg Yang · Jiri Hron · Jascha Sohl-Dickstein · Guy Gur-Ari
- 2019: Analyzing the dynamics of online learning in over-parameterized two-layer neural networks »
  Sebastian Goldt