Oral
The Power of Interpolation: Understanding the Effectiveness of SGD in Modern Over-parametrized Learning
Siyuan Ma · Raef Bassily · Mikhail Belkin
In this paper we aim to formally explain the phenomenon of fast convergence of Stochastic Gradient Descent (SGD) observed in modern machine learning. The key observation is that most modern learning architectures are over-parametrized and are trained to interpolate the data by driving the empirical loss (classification and regression) close to zero. While it is still unclear why these interpolated solutions perform well on test data, we show that these regimes allow for fast convergence of SGD, comparable in the number of iterations to full gradient descent. For convex loss functions we obtain an exponential convergence bound for \emph{mini-batch} SGD parallel to that for full gradient descent. We show that there is a critical batch size $m^*$ such that: (a) an SGD iteration with mini-batch size $m\leq m^*$ is nearly equivalent to $m$ iterations of mini-batch size $1$ (\emph{linear scaling regime}); (b) an SGD iteration with mini-batch size $m> m^*$ is nearly equivalent to a full gradient descent iteration (\emph{saturation regime}). Moreover, for the quadratic loss, we derive explicit expressions for the optimal mini-batch size and step size and explicitly characterize the two regimes above. The critical mini-batch size can be viewed as the limit for effective mini-batch parallelization. It is also nearly independent of the data size, implying $O(n)$ acceleration over GD per unit of computation. We give experimental evidence on real data that closely follows our theoretical analysis. Finally, we show how our results fit into recent developments in training deep neural networks and discuss connections to adaptive rates for SGD and variance reduction.
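As an informal illustration of the setting the abstract describes, the following minimal sketch (not the authors' code) runs mini-batch SGD on a noiseless over-parametrized least-squares problem, where an interpolating solution exists and the empirical loss can be driven to near zero. The problem sizes, batch sizes, and step sizes are illustrative assumptions, not the optimal values derived in the paper.

```python
import numpy as np

# Minimal sketch (not the authors' code): mini-batch SGD on a noiseless,
# over-parametrized least-squares problem, where an interpolating solution
# exists and the empirical loss can be driven to (near) zero.
# Problem sizes, batch sizes, and step sizes below are illustrative choices,
# not the optimal values derived in the paper.

rng = np.random.default_rng(0)
n, d = 200, 1000                       # n samples, d parameters (d > n)
X = rng.standard_normal((n, d)) / np.sqrt(d)
w_star = rng.standard_normal(d)
y = X @ w_star                         # noiseless targets => interpolation is possible

def minibatch_sgd(m, lr, epochs=100):
    """Run SGD with mini-batch size m and constant step size lr; return the final train loss."""
    w = np.zeros(d)
    for _ in range(epochs):
        perm = rng.permutation(n)
        for start in range(0, n, m):
            idx = perm[start:start + m]
            Xb, yb = X[idx], y[idx]
            grad = Xb.T @ (Xb @ w - yb) / len(idx)   # gradient of (0.5/|b|) * ||Xb w - yb||^2
            w -= lr * grad
    return 0.5 * np.mean((X @ w - y) ** 2)

# Larger batches are paired with (roughly) proportionally larger step sizes,
# mimicking the linear-scaling behaviour described in the abstract.
for m, lr in [(1, 0.5), (8, 4.0), (64, 16.0)]:
    print(f"batch size {m:3d}, step size {lr:5.1f}: final train loss {minibatch_sgd(m, lr):.3e}")
```

On such a toy problem one would expect the training loss to fall by many orders of magnitude for all three settings, in line with the exponential convergence in the interpolation regime; the paper's critical batch size $m^*$ and optimal step size are not computed here.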
Author Information
Siyuan Ma (The Ohio State University)
Raef Bassily (The Ohio State University)
Mikhail Belkin (The Ohio State University)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Poster: The Power of Interpolation: Understanding the Effectiveness of SGD in Modern Over-parametrized Learning »
  Wed. Jul 11th 04:15 -- 07:00 PM, Room Hall B #204
More from the Same Authors
- 2021 : Non-Euclidean Differentially Private Stochastic Convex Optimization »
  Raef Bassily · Cristobal Guzman · Anupama Nandi
- 2023 Poster: Faster Rates of Convergence to Stationary Points in Differentially Private Optimization »
  Raman Arora · Raef Bassily · Tomás González · Cristobal Guzman · Michael Menart · Enayat Ullah
- 2023 Poster: User-level Private Stochastic Convex Optimization with Optimal Rates »
  Raef Bassily · Ziteng Sun
- 2019 : Panel Discussion (Nati Srebro, Dan Roy, Chelsea Finn, Mikhail Belkin, Aleksander Mądry, Jason Lee) »
  Nati Srebro · Daniel Roy · Chelsea Finn · Mikhail Belkin · Aleksander Madry · Jason Lee
- 2019 : Keynote by Mikhail Belkin: A Hard Look at Generalization and its Theories »
  Mikhail Belkin
- 2018 Poster: To Understand Deep Learning We Need to Understand Kernel Learning »
  Mikhail Belkin · Siyuan Ma · Soumik Mandal
- 2018 Oral: To Understand Deep Learning We Need to Understand Kernel Learning »
  Mikhail Belkin · Siyuan Ma · Soumik Mandal