When does gradient descent with logistic loss interpolate using deep networks with smoothed ReLU activations?
Niladri Chatterji · Phil Long · Peter Bartlett
We establish conditions under which gradient descent applied to fixed-width deep networks drives the logistic loss to zero, and prove bounds on the rate of convergence. Our analysis applies to smoothed approximations of the ReLU, such as Swish and the Huberized ReLU, that have been proposed in previous applied work. We provide two sufficient conditions for convergence. The first is simply a bound on the loss at initialization. The second is a data separation condition used in prior analyses.
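For readers unfamiliar with the activations named in the abstract, the sketch below evaluates one common parameterization of each; it is an illustration, not part of the paper. The Swish scale beta and the Huberized-ReLU threshold h, as well as the exact piecewise form used here, are illustrative assumptions.

```python
import numpy as np

def swish(x, beta=1.0):
    """Swish: x * sigmoid(beta * x), a smooth stand-in for the ReLU."""
    return x / (1.0 + np.exp(-beta * x))

def huberized_relu(x, h=0.5):
    """A Huberized ReLU: zero for x <= 0, quadratic on (0, h], linear (slope 1)
    beyond h. Continuous with continuous derivative; h is an illustrative choice."""
    return np.where(x <= 0.0, 0.0,
                    np.where(x <= h, x ** 2 / (2.0 * h), x - h / 2.0))

if __name__ == "__main__":
    xs = np.linspace(-2.0, 2.0, 9)
    print("x            :", np.round(xs, 2))
    print("swish(x)     :", np.round(swish(xs), 3))
    print("huber_relu(x):", np.round(huberized_relu(xs), 3))
```

Both functions are smooth near zero, which is the property the analysis relies on, while behaving like the ReLU away from zero.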
Author Information
Niladri Chatterji (UC Berkeley)
Phil Long (Google)
Peter Bartlett (UC Berkeley)
More from the Same Authors
- 2021 : Finite-Sample Analysis of Interpolating Linear Classifiers in the Overparameterized Regime
  Niladri Chatterji · Phil Long
- 2021 : On the Theory of Reinforcement Learning with Once-per-Episode Feedback
  Niladri Chatterji · Aldo Pacchiano · Peter Bartlett · Michael Jordan
- 2021 : Adversarial Examples in Random Deep Networks
  Peter Bartlett
- 2020 Poster: On Thompson Sampling with Langevin Algorithms
  Eric Mazumdar · Aldo Pacchiano · Yian Ma · Michael Jordan · Peter Bartlett
- 2020 Poster: Accelerated Message Passing for Entropy-Regularized MAP Inference
  Jonathan Lee · Aldo Pacchiano · Peter Bartlett · Michael Jordan
- 2019 Poster: Defending Against Saddle Point Attack in Byzantine-Robust Distributed Learning
  Dong Yin · Yudong Chen · Kannan Ramchandran · Peter Bartlett
- 2019 Poster: Scale-free adaptive planning for deterministic dynamics & discounted rewards
  Peter Bartlett · Victor Gabillon · Jennifer Healey · Michal Valko
- 2019 Oral: Scale-free adaptive planning for deterministic dynamics & discounted rewards
  Peter Bartlett · Victor Gabillon · Jennifer Healey · Michal Valko
- 2019 Oral: Defending Against Saddle Point Attack in Byzantine-Robust Distributed Learning
  Dong Yin · Yudong Chen · Kannan Ramchandran · Peter Bartlett
- 2019 Poster: Rademacher Complexity for Adversarially Robust Generalization
  Dong Yin · Kannan Ramchandran · Peter Bartlett
- 2019 Poster: Online learning with kernel losses
  Niladri Chatterji · Aldo Pacchiano · Peter Bartlett
- 2019 Oral: Rademacher Complexity for Adversarially Robust Generalization
  Dong Yin · Kannan Ramchandran · Peter Bartlett
- 2019 Oral: Online learning with kernel losses
  Niladri Chatterji · Aldo Pacchiano · Peter Bartlett
- 2018 Poster: Gradient descent with identity initialization efficiently learns positive definite linear transformations by deep residual networks
  Peter Bartlett · Dave Helmbold · Phil Long
- 2018 Poster: On the Theory of Variance Reduction for Stochastic Gradient Monte Carlo
  Niladri Chatterji · Nicolas Flammarion · Yian Ma · Peter Bartlett · Michael Jordan
- 2018 Oral: On the Theory of Variance Reduction for Stochastic Gradient Monte Carlo
  Niladri Chatterji · Nicolas Flammarion · Yian Ma · Peter Bartlett · Michael Jordan
- 2018 Oral: Gradient descent with identity initialization efficiently learns positive definite linear transformations by deep residual networks
  Peter Bartlett · Dave Helmbold · Phil Long
- 2018 Poster: Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates
  Dong Yin · Yudong Chen · Kannan Ramchandran · Peter Bartlett
- 2018 Oral: Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates
  Dong Yin · Yudong Chen · Kannan Ramchandran · Peter Bartlett
- 2017 Poster: Recovery Guarantees for One-hidden-layer Neural Networks
  Kai Zhong · Zhao Song · Prateek Jain · Peter Bartlett · Inderjit Dhillon
- 2017 Talk: Recovery Guarantees for One-hidden-layer Neural Networks
  Kai Zhong · Zhao Song · Prateek Jain · Peter Bartlett · Inderjit Dhillon