Group equivariant convolutional neural networks (G-CNNs) are generalizations of convolutional neural networks (CNNs) which excel in a wide range of technical applications by explicitly encoding symmetries, such as rotations and permutations, in their architectures. Although the success of G-CNNs is driven by their explicit symmetry bias, a recent line of work has proposed that the implicit bias of training algorithms on particular architectures is key to understanding generalization for overparameterized neural nets. In this context, we show that L-layer full-width linear G-CNNs trained via gradient descent for binary classification converge to solutions with low-rank Fourier matrix coefficients, regularized by the 2/L-Schatten matrix norm. Our work strictly generalizes previous analysis on the implicit bias of linear CNNs to linear G-CNNs over all finite groups, including the challenging setting of non-commutative groups (such as permutations), as well as band-limited G-CNNs over infinite groups. We validate our theorems via experiments on a variety of groups, and empirically explore more realistic nonlinear networks, which locally capture similar regularization patterns. Finally, we provide intuitive interpretations of our Fourier space implicit regularization results in real space via uncertainty principles.
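The abstract's main claim is that an L-layer linear G-CNN trained by gradient descent is implicitly biased toward predictors whose group Fourier coefficients have small 2/L-Schatten (quasi-)norm, i.e. low-rank Fourier matrices. The following is a minimal illustrative sketch, not the authors' code: it assumes the commutative case (the cyclic group Z_n), where the group Fourier transform reduces to the ordinary DFT, each Fourier "matrix" coefficient is 1x1, and the 2/L-Schatten penalty reduces to an l_{2/L} quasi-norm on the DFT of the end-to-end linear predictor. For non-commutative groups the coefficients are genuine matrices and the penalty instead sums singular values raised to the power 2/L. Names such as `schatten_2_over_L_penalty` are hypothetical.

```python
# Illustrative sketch only: 2/L-Schatten penalty on Fourier coefficients
# for the commutative group Z_n, where it reduces to an l_{2/L} quasi-norm
# on the (unitary) DFT of the end-to-end linear predictor.
import numpy as np

def schatten_2_over_L_penalty(beta: np.ndarray, num_layers: int) -> float:
    """Sum of singular values^(2/L); over Z_n the Fourier coefficients are 1x1,
    so the singular values are just the magnitudes of the DFT coefficients."""
    fourier_coeffs = np.fft.fft(beta) / np.sqrt(len(beta))  # unitary DFT
    return float(np.sum(np.abs(fourier_coeffs) ** (2.0 / num_layers)))

# Example: a predictor supported on a few Fourier modes incurs a smaller
# penalty than a dense one of the same l2 norm, illustrating the sparse /
# low-rank Fourier bias described in the abstract.
rng = np.random.default_rng(0)
n, L = 64, 3
sparse_beta = np.fft.ifft(np.eye(n)[0] + np.eye(n)[5]).real  # few active modes
dense_beta = rng.standard_normal(n)                          # many active modes
dense_beta *= np.linalg.norm(sparse_beta) / np.linalg.norm(dense_beta)
print(schatten_2_over_L_penalty(sparse_beta, L))  # small penalty
print(schatten_2_over_L_penalty(dense_beta, L))   # noticeably larger penalty
```

Under this sketch, deeper networks (larger L) push the exponent 2/L toward zero, so the penalty increasingly counts active Fourier modes rather than their magnitudes, which is one way to read the low-rank bias stated in the abstract.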
Author Information
Hannah Lawrence (MIT)
Bobak T Kiani (MIT)
Kristian Georgiev (MIT)
Andrew Dienes (MIT)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Implicit Bias of Linear Equivariant Networks »
  Wed. Jul 20 through Thu. Jul 21, Hall E #520
More from the Same Authors
- 2023: The Journey, Not the Destination: How Data Guides Diffusion Models »
  Kristian Georgiev · Joshua Vendrow · Hadi Salman · Sung Min (Sam) Park · Aleksander Madry
- 2023 Oral: Equivariant Polynomials for Graph Neural Networks »
  Omri Puny · Derek Lim · Bobak T Kiani · Haggai Maron · Yaron Lipman
- 2023 Poster: Equivariant Polynomials for Graph Neural Networks »
  Omri Puny · Derek Lim · Bobak T Kiani · Haggai Maron · Yaron Lipman
- 2023 Poster: TRAK: Attributing Model Behavior at Scale »
  Sung Min (Sam) Park · Kristian Georgiev · Andrew Ilyas · Guillaume Leclerc · Aleksander Madry
- 2023 Poster: The SSL Interplay: Augmentations, Inductive Bias, and Generalization »
  Vivien Cabannnes · Bobak T Kiani · Randall Balestriero · Yann LeCun · Alberto Bietti
- 2023 Oral: TRAK: Attributing Model Behavior at Scale »
  Sung Min (Sam) Park · Kristian Georgiev · Andrew Ilyas · Guillaume Leclerc · Aleksander Madry
- 2023 Poster: Rethinking Backdoor Attacks »
  Alaa Khaddaj · Guillaume Leclerc · Aleksandar Makelov · Kristian Georgiev · Hadi Salman · Andrew Ilyas · Aleksander Madry
- 2021 Poster: Adversarial Robustness Guarantees for Random Deep Neural Networks »
  Giacomo De Palma · Bobak T Kiani · Seth Lloyd
- 2021 Spotlight: Adversarial Robustness Guarantees for Random Deep Neural Networks »
  Giacomo De Palma · Bobak T Kiani · Seth Lloyd