Poster
On the Convergence of Gradient Flow on Multi-layer Linear Models
Hancheng Min · Rene Vidal · Enrique Mallada
In this paper, we analyze the convergence of gradient flow on a multi-layer linear model with a loss function of the form $f(W_1W_2\cdots W_L)$. We show that when $f$ satisfies the gradient dominance property, proper weight initialization leads to exponential convergence of the gradient flow to a global minimum of the loss. Moreover, the convergence rate depends on two trajectory-specific quantities that are controlled by the weight initialization: the *imbalance matrices*, which measure the difference between the weights of adjacent layers, and the least singular value of the *weight product* $W=W_1W_2\cdots W_L$. Our analysis exploits the fact that the gradient of the overparameterized loss can be written as the composition of the non-overparameterized gradient with a time-varying (weight-dependent) linear operator whose smallest eigenvalue controls the convergence rate. The key challenge we address is to derive a uniform lower bound for this time-varying eigenvalue that leads to improved rates for several multi-layer network models studied in the literature.
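The two trajectory-specific quantities in the abstract are easy to probe numerically. Below is a minimal sketch (not the authors' code): it runs an Euler discretization of gradient flow on a 3-layer linear model with the gradient-dominated loss $f(W)=\tfrac{1}{2}\|W-W_\star\|_F^2$, and checks that the imbalance matrices $W_l^\top W_l - W_{l+1}W_{l+1}^\top$ stay approximately constant along the trajectory while the loss decays. The layer widths, step size, initialization scale, and target `W_star` are all illustrative choices, not values from the paper.

```python
import numpy as np

# Sketch: Euler-discretized gradient flow on a 3-layer linear model with
# loss f(W1 W2 W3) = 0.5 * ||W1 W2 W3 - W_star||_F^2 (gradient dominated).
# We track the loss and the imbalance matrices
#   D_l = W_l^T W_l - W_{l+1} W_{l+1}^T,
# which are conserved under exact gradient flow, so their drift here should
# shrink as the step size goes to zero.

rng = np.random.default_rng(0)
dims = [4, 6, 6, 4]                       # W_l has shape (dims[l-1], dims[l]); illustrative widths
W = [0.3 * rng.standard_normal((dims[l], dims[l + 1])) for l in range(3)]
W_star = rng.standard_normal((dims[0], dims[-1]))   # hypothetical target for the end-to-end product

def product(mats):
    P = mats[0]
    for M in mats[1:]:
        P = P @ M
    return P

def imbalances(W):
    return [W[l].T @ W[l] - W[l + 1] @ W[l + 1].T for l in range(len(W) - 1)]

eta = 1e-3                                # small step approximating continuous-time flow
D0 = imbalances(W)
for step in range(20000):
    P = product(W)
    G = P - W_star                        # gradient of f at the product W = W1 W2 W3
    grads = []
    for l in range(3):
        # Chain rule: dL/dW_l = (W_1...W_{l-1})^T * grad f(W) * (W_{l+1}...W_L)^T
        left = np.eye(dims[0]) if l == 0 else product(W[:l])
        right = np.eye(dims[-1]) if l == 2 else product(W[l + 1:])
        grads.append(left.T @ G @ right.T)
    W = [Wl - eta * g for Wl, g in zip(W, grads)]

loss = 0.5 * np.linalg.norm(product(W) - W_star) ** 2
drift = max(np.linalg.norm(D - D0_l) for D, D0_l in zip(imbalances(W), D0))
print(f"final loss = {loss:.3e}, max imbalance drift = {drift:.3e}")
```

The loss should be far below its initial value and the imbalance drift small, illustrating why initialization (which fixes the imbalance and the least singular value of the product) governs the convergence rate.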
Author Information
Hancheng Min (Johns Hopkins University)
Rene Vidal (University of Pennsylvania)
Enrique Mallada (Johns Hopkins University)
More from the Same Authors
- 2023 Workshop: HiLD: High-dimensional Learning Dynamics Workshop »
  Courtney Paquette · Zhenyu Liao · Mihai Nica · Elliot Paquette · Andrew Saxe · Rene Vidal
- 2023 Poster: Learning Globally Smooth Functions on Manifolds »
  Juan Cervino · Luiz Chamon · Benjamin Haeffele · Rene Vidal · Alejandro Ribeiro
- 2023 Poster: The Ideal Continual Learner: An Agent That Never Forgets »
  Liangzu Peng · Paris Giampouras · Rene Vidal
- 2022 Poster: Understanding Doubly Stochastic Clustering »
  Tianjiao Ding · Derek Lim · Rene Vidal · Benjamin Haeffele
- 2022 Spotlight: Understanding Doubly Stochastic Clustering »
  Tianjiao Ding · Derek Lim · Rene Vidal · Benjamin Haeffele
- 2022 Poster: Reverse Engineering $\ell_p$ attacks: A block-sparse optimization approach with recovery guarantees »
  Darshan Thaker · Paris Giampouras · Rene Vidal
- 2022 Spotlight: Reverse Engineering $\ell_p$ attacks: A block-sparse optimization approach with recovery guarantees »
  Darshan Thaker · Paris Giampouras · Rene Vidal
- 2021 Poster: Dual Principal Component Pursuit for Robust Subspace Learning: Theory and Algorithms for a Holistic Approach »
  Tianyu Ding · Zhihui Zhu · Rene Vidal · Daniel Robinson
- 2021 Spotlight: Dual Principal Component Pursuit for Robust Subspace Learning: Theory and Algorithms for a Holistic Approach »
  Tianyu Ding · Zhihui Zhu · Rene Vidal · Daniel Robinson
- 2021 Poster: Understanding the Dynamics of Gradient Flow in Overparameterized Linear models »
  Salma Tarmoun · Guilherme Franca · Benjamin Haeffele · Rene Vidal
- 2021 Poster: On the Explicit Role of Initialization on the Convergence and Implicit Bias of Overparametrized Linear Networks »
  Hancheng Min · Salma Tarmoun · Rene Vidal · Enrique Mallada
- 2021 Spotlight: On the Explicit Role of Initialization on the Convergence and Implicit Bias of Overparametrized Linear Networks »
  Hancheng Min · Salma Tarmoun · Rene Vidal · Enrique Mallada
- 2021 Spotlight: Understanding the Dynamics of Gradient Flow in Overparameterized Linear models »
  Salma Tarmoun · Guilherme Franca · Benjamin Haeffele · Rene Vidal
- 2021 Poster: A Nullspace Property for Subspace-Preserving Recovery »
  Mustafa D Kaba · Chong You · Daniel Robinson · Enrique Mallada · Rene Vidal
- 2021 Spotlight: A Nullspace Property for Subspace-Preserving Recovery »
  Mustafa D Kaba · Chong You · Daniel Robinson · Enrique Mallada · Rene Vidal
- 2019 Poster: Noisy Dual Principal Component Pursuit »
  Tianyu Ding · Zhihui Zhu · Tianjiao Ding · Yunchen Yang · Daniel Robinson · Manolis Tsakiris · Rene Vidal
- 2019 Oral: Noisy Dual Principal Component Pursuit »
  Tianyu Ding · Zhihui Zhu · Tianjiao Ding · Yunchen Yang · Daniel Robinson · Manolis Tsakiris · Rene Vidal