We provide a detailed analysis of the dynamics of the gradient flow in overparameterized two-layer linear models. A particularly interesting feature of this model is that its nonlinear dynamics can be exactly solved as a consequence of a large number of conservation laws that constrain the system to follow particular trajectories. More precisely, the gradient flow preserves the difference of the Gramian matrices of the input and output weights, and its convergence to equilibrium depends on both the magnitude of that difference (which is fixed at initialization) and the spectrum of the data. In addition, and generalizing prior work, we prove our results without assuming small, balanced, or spectral initialization for the weights. Moreover, we establish interesting mathematical connections between matrix factorization problems and differential equations of the Riccati type.
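The conservation law stated in the abstract can be checked numerically. The sketch below is a minimal illustration (not the paper's code; all dimensions, the seed, and the step size are arbitrary choices): it runs small-step gradient descent on the squared loss of a two-layer linear model and verifies that the difference of the weight Gramians, W1 W1ᵀ − W2ᵀW2, barely drifts from its value at initialization.

```python
import numpy as np

# Two-layer linear model f(x) = W2 @ W1 @ x trained on the squared loss
# 0.5 * ||W2 W1 X - Y||_F^2. Under gradient flow, D = W1 W1^T - W2^T W2
# is exactly conserved; small-step gradient descent approximates the flow,
# so D should drift only slightly over training.
rng = np.random.default_rng(0)
d_in, hidden, d_out, n = 3, 4, 2, 10
X = rng.standard_normal((d_in, n))
Y = rng.standard_normal((d_out, n))

# Generic initialization: not small, balanced, or spectral.
W1 = 0.3 * rng.standard_normal((hidden, d_in))
W2 = 0.3 * rng.standard_normal((d_out, hidden))

D0 = W1 @ W1.T - W2.T @ W2  # conserved quantity, fixed at initialization
lr = 1e-3

for _ in range(2000):
    E = W2 @ W1 @ X - Y   # residual
    G1 = W2.T @ E @ X.T   # dL/dW1
    G2 = E @ X.T @ W1.T   # dL/dW2
    W1 -= lr * G1
    W2 -= lr * G2

drift = np.max(np.abs((W1 @ W1.T - W2.T @ W2) - D0))
print(f"max |D(t) - D(0)| after training: {drift:.2e}")
```

For the continuous-time flow the invariance is exact; the discrete update introduces a per-step error of order lr², so the observed drift shrinks as the step size is reduced.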
Author Information
Salma Tarmoun (Johns Hopkins University)
Guilherme Franca (UC Berkeley)
Benjamin Haeffele (Johns Hopkins University)
Rene Vidal (Johns Hopkins University, USA)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Spotlight: Understanding the Dynamics of Gradient Flow in Overparameterized Linear Models »
  Thu. Jul 22nd 12:45 -- 12:50 AM
More from the Same Authors
- 2023 Workshop: HiLD: High-dimensional Learning Dynamics Workshop »
  Courtney Paquette · Zhenyu Liao · Mihai Nica · Elliot Paquette · Andrew Saxe · Rene Vidal
- 2023 Poster: On the Convergence of Gradient Flow on Multi-layer Linear Models »
  Hancheng Min · Rene Vidal · Enrique Mallada
- 2023 Poster: Learning Globally Smooth Functions on Manifolds »
  Juan Cervino · Luiz Chamon · Benjamin Haeffele · Rene Vidal · Alejandro Ribeiro
- 2023 Poster: The Ideal Continual Learner: An Agent That Never Forgets »
  Liangzu Peng · Paris Giampouras · Rene Vidal
- 2022 Poster: Understanding Doubly Stochastic Clustering »
  Tianjiao Ding · Derek Lim · Rene Vidal · Benjamin Haeffele
- 2022 Spotlight: Understanding Doubly Stochastic Clustering »
  Tianjiao Ding · Derek Lim · Rene Vidal · Benjamin Haeffele
- 2022 Poster: Reverse Engineering $\ell_p$ attacks: A block-sparse optimization approach with recovery guarantees »
  Darshan Thaker · Paris Giampouras · Rene Vidal
- 2022 Spotlight: Reverse Engineering $\ell_p$ attacks: A block-sparse optimization approach with recovery guarantees »
  Darshan Thaker · Paris Giampouras · Rene Vidal
- 2021 Poster: Dual Principal Component Pursuit for Robust Subspace Learning: Theory and Algorithms for a Holistic Approach »
  Tianyu Ding · Zhihui Zhu · Rene Vidal · Daniel Robinson
- 2021 Spotlight: Dual Principal Component Pursuit for Robust Subspace Learning: Theory and Algorithms for a Holistic Approach »
  Tianyu Ding · Zhihui Zhu · Rene Vidal · Daniel Robinson
- 2021 Poster: On the Explicit Role of Initialization on the Convergence and Implicit Bias of Overparametrized Linear Networks »
  Hancheng Min · Salma Tarmoun · Rene Vidal · Enrique Mallada
- 2021 Spotlight: On the Explicit Role of Initialization on the Convergence and Implicit Bias of Overparametrized Linear Networks »
  Hancheng Min · Salma Tarmoun · Rene Vidal · Enrique Mallada
- 2021 Poster: A Nullspace Property for Subspace-Preserving Recovery »
  Mustafa D Kaba · Chong You · Daniel Robinson · Enrique Mallada · Rene Vidal
- 2021 Spotlight: A Nullspace Property for Subspace-Preserving Recovery »
  Mustafa D Kaba · Chong You · Daniel Robinson · Enrique Mallada · Rene Vidal
- 2019 Poster: Noisy Dual Principal Component Pursuit »
  Tianyu Ding · Zhihui Zhu · Tianjiao Ding · Yunchen Yang · Daniel Robinson · Manolis Tsakiris · Rene Vidal
- 2019 Oral: Noisy Dual Principal Component Pursuit »
  Tianyu Ding · Zhihui Zhu · Tianjiao Ding · Yunchen Yang · Daniel Robinson · Manolis Tsakiris · Rene Vidal
- 2018 Poster: ADMM and Accelerated ADMM as Continuous Dynamical Systems »
  Guilherme Franca · Daniel Robinson · Rene Vidal
- 2018 Oral: ADMM and Accelerated ADMM as Continuous Dynamical Systems »
  Guilherme Franca · Daniel Robinson · Rene Vidal