Training deep neural networks with stochastic gradient descent (SGD) often achieves zero training loss on real-world tasks, even though the optimization landscape is known to be highly non-convex. To understand this success, this work presents a mean-field analysis of deep residual networks, building on a line of work that interprets the continuum limit of a deep residual network as an ordinary differential equation as the network capacity tends to infinity. Specifically, we propose a \textbf{new continuum limit} of deep residual networks that enjoys a good landscape in the sense that \textbf{every local minimizer is global}. This characterization enables us to derive the first global convergence result for multilayer neural networks in the mean-field regime. Our proof does not rely on convexity of the loss landscape; instead, it only assumes that the global minimizer achieves zero loss, which holds whenever the model has a universal approximation property. Key to our result is the observation that a deep residual network resembles a shallow network ensemble~\cite{veit2016residual}, \emph{i.e.}, a two-layer network. We bound the difference between this shallow network and our ResNet model via the adjoint sensitivity method, which allows us to transfer previous mean-field analyses of two-layer networks to deep networks. Finally, we propose several novel training schemes based on the new continuous model; one of them switches the order of the residual blocks during training and achieves strong empirical performance on benchmark datasets.
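The ODE view of a residual network, and the block-order-switching idea, can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the tanh residual branch, the weight scale, and all names (`residual_block`, `resnet_forward`) are illustrative assumptions; the only grounded points are that each residual block acts as one forward-Euler step of size 1/L, and that block order can be permuted during the forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_block(x, W):
    # Illustrative residual branch f(x; W) = tanh(W x); tanh keeps the map bounded.
    return np.tanh(W @ x)

def resnet_forward(x, weights, permutation=None):
    """Forward pass x_{k+1} = x_k + (1/L) f(x_k; W_k), i.e. forward Euler with
    step 1/L on dx/dt = f(x, W(t)). `permutation` optionally reorders the
    residual blocks, sketching the block-switching scheme from the abstract."""
    L = len(weights)
    order = permutation if permutation is not None else range(L)
    for k in order:
        x = x + (1.0 / L) * residual_block(x, weights[k])
    return x

d, L = 4, 64
weights = [0.1 * rng.standard_normal((d, d)) for _ in range(L)]
x0 = rng.standard_normal(d)

out = resnet_forward(x0, weights)
out_shuffled = resnet_forward(x0, weights, permutation=rng.permutation(L))

# With step size 1/L the two orderings yield nearby outputs, consistent with
# the ODE picture in which the continuum limit, not the discrete block order,
# governs the network's behavior.
print(np.linalg.norm(out - out_shuffled))
```

Running this with a deep stack (large `L`) shows the gap between the original and the permuted forward pass shrinking, which is the intuition behind treating block order as a degree of freedom during training.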
Author Information
Yiping Lu (Stanford University)
Chao Ma (Princeton University)
Yulong Lu (Duke University)
Jianfeng Lu (Duke University)
Lexing Ying (Stanford University)
More from the Same Authors
- 2021: Stateful Performative Gradient Descent (Zachary Izzo · James Zou · Lexing Ying)
- 2023: Deep Equilibrium Based Neural Operators for Steady-State PDEs (Tanya Marwah · Ashwini Pokle · Zico Kolter · Zachary Lipton · Jianfeng Lu · Andrej Risteski)
- 2023 Poster: Neural Network Approximations of PDEs Beyond Linearity: A Representational Perspective (Tanya Marwah · Zachary Lipton · Jianfeng Lu · Andrej Risteski)
- 2023 Poster: Global optimality of Elman-type RNNs in the mean-field regime (Andrea Agazzi · Jianfeng Lu · Sayan Mukherjee)
- 2023 Poster: Improved Analysis of Score-based Generative Modeling: User-Friendly Bounds under Minimal Smoothness Assumptions (Hongrui Chen · Holden Lee · Jianfeng Lu)
- 2023 Poster: On Enhancing Expressive Power via Compositions of Single Fixed-Size ReLU Network (Shijun Zhang · Jianfeng Lu · Hongkai Zhao)
- 2021 Poster: Top-k eXtreme Contextual Bandits with Arm Hierarchy (Rajat Sen · Alexander Rakhlin · Lexing Ying · Rahul Kidambi · Dean Foster · Daniel Hill · Inderjit Dhillon)
- 2021 Spotlight: Top-k eXtreme Contextual Bandits with Arm Hierarchy (Rajat Sen · Alexander Rakhlin · Lexing Ying · Rahul Kidambi · Dean Foster · Daniel Hill · Inderjit Dhillon)
- 2021 Poster: How to Learn when Data Reacts to Your Model: Performative Gradient Descent (Zachary Izzo · Lexing Ying · James Zou)
- 2021 Spotlight: How to Learn when Data Reacts to Your Model: Performative Gradient Descent (Zachary Izzo · Lexing Ying · James Zou)