Recent works have cast some light on the mystery of why deep nets fit any data and generalize despite being very overparametrized. This paper analyzes training and generalization for a simple 2-layer ReLU net with random initialization, and provides the following improvements over recent works:
(i) Using a tighter characterization of training speed than recent papers, an explanation for why training a neural net with random labels leads to slower training, as originally observed in [Zhang et al., ICLR'17].
(ii) A generalization bound independent of network size, using a data-dependent complexity measure. Experiments show that our measure clearly distinguishes random labels from true labels on MNIST and CIFAR. Moreover, recent papers require the sample complexity to increase (slowly) with the network size, while ours is completely independent of it.
(iii) Learnability of a broad class of smooth functions by 2-layer ReLU nets trained via gradient descent.
The key idea is to track the dynamics of training and generalization via properties of a related kernel.
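To make the kernel concrete: for a two-layer ReLU net with unit-norm inputs, the relevant infinite-width Gram matrix has the closed form H∞_ij = xᵢᵀxⱼ (π − arccos(xᵢᵀxⱼ)) / (2π), and the complexity measure of point (ii) is √(2 yᵀ(H∞)⁻¹y / n). Below is a minimal NumPy sketch of both quantities; it is not the authors' code, and the toy data and function names are illustrative assumptions.

```python
import numpy as np

def ntk_gram(X):
    """Gram matrix H_inf of the kernel induced by an infinitely wide
    two-layer ReLU net: for unit-norm rows x_i, x_j,
        H_inf[i, j] = <x_i, x_j> * (pi - arccos(<x_i, x_j>)) / (2 * pi).
    """
    X = X / np.linalg.norm(X, axis=1, keepdims=True)  # normalize to unit sphere
    G = np.clip(X @ X.T, -1.0, 1.0)                   # pairwise cosines
    return G * (np.pi - np.arccos(G)) / (2 * np.pi)

def complexity_measure(X, y):
    """Data-dependent measure sqrt(2 * y^T H_inf^{-1} y / n) controlling the
    generalization bound: small when the labels align with the kernel's top
    eigenvectors, large for random labels."""
    H = ntk_gram(X)
    return np.sqrt(2.0 * y @ np.linalg.solve(H, y) / len(y))

# Toy comparison (hypothetical data): structured vs. random labels.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
y_true = np.sign(X[:, 0])                    # labels correlated with inputs
y_rand = rng.choice([-1.0, 1.0], size=200)   # random labels
print("true labels  :", complexity_measure(X, y_true))
print("random labels:", complexity_measure(X, y_rand))

# Heuristic behind (i): gradient descent on the linearized net shrinks the
# residual roughly as sum_i (1 - eta * lam_i)^(2t) * (v_i^T y)^2, so labels
# whose mass sits on top eigenvectors of H_inf train fast, while random
# labels (spread across small eigenvalues) train slowly.
lam, V = np.linalg.eigh(ntk_gram(X))         # ascending eigenvalues
top10 = np.sum((V[:, -10:].T @ y_true) ** 2) / (y_true @ y_true)
print("true-label mass on top-10 eigenvectors:", top10)
```

On the toy data above, the measure is typically smaller for the structured labels than for the random ones, mirroring the MNIST/CIFAR separation reported in the paper.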
Author Information
Sanjeev Arora (Princeton University and Institute for Advanced Study)
Simon Du (Carnegie Mellon University)
Wei Hu (Princeton University)
Zhiyuan Li (Princeton University)
Ruosong Wang (Carnegie Mellon University)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Poster: Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks
  Wed. Jun 12th 01:30 -- 04:00 AM, Room Pacific Ballroom #75
More from the Same Authors
- 2021 : Online Sub-Sampling for Reinforcement Learning with General Function Approximation
  Dingwen Kong · Ruslan Salakhutdinov · Ruosong Wang · Lin Yang
- 2023 : The Marginal Value of Momentum for Small Learning Rate SGD
  Runzhe Wang · Sadhika Malladi · Tianhao Wang · Kaifeng Lyu · Zhiyuan Li
- 2023 : Sharpness Minimization Algorithms Do Not Only Minimize Sharpness To Achieve Better Generalization
  Kaiyue Wen · Tengyu Ma · Zhiyuan Li
- 2023 : Chain of Thought Empowers Transformers to Solve Inherently Serial Problems
  Zhiyuan Li · Hong Liu · Denny Zhou · Tengyu Ma
- 2023 : Fine-Tuning Language Models with Just Forward Passes
  Sadhika Malladi · Tianyu Gao · Eshaan Nichani · Jason Lee · Danqi Chen · Sanjeev Arora
- 2023 : 🎤 Fine-Tuning Language Models with Just Forward Passes
  Sadhika Malladi · Tianyu Gao · Eshaan Nichani · Alex Damian · Jason Lee · Danqi Chen · Sanjeev Arora
- 2023 : High-dimensional Optimization in the Age of ChatGPT
  Sanjeev Arora
- 2023 Poster: Task-Specific Skill Localization in Fine-tuned Language Models
  Abhishek Panigrahi · Nikunj Saunshi · Haoyu Zhao · Sanjeev Arora
- 2023 Poster: A Kernel-Based View of Language Model Fine-Tuning
  Sadhika Malladi · Alexander Wettig · Dingli Yu · Danqi Chen · Sanjeev Arora
- 2023 Poster: Horizon-Free and Variance-Dependent Reinforcement Learning for Latent Markov Decision Processes
  Runlong Zhou · Ruosong Wang · Simon Du
- 2022 : On the SDEs and Scaling Rules for Adaptive Gradient Algorithms
  Sadhika Malladi · Kaifeng Lyu · Abhishek Panigrahi · Sanjeev Arora
- 2022 : Implicit Bias of Gradient Descent on Reparametrized Models: On Equivalence to Mirror Descent
  Zhiyuan Li · Tianhao Wang · Jason Lee · Sanjeev Arora
- 2022 Poster: Understanding Contrastive Learning Requires Incorporating Inductive Biases
  Nikunj Umesh Saunshi · Jordan Ash · Surbhi Goel · Dipendra Kumar Misra · Cyril Zhang · Sanjeev Arora · Sham Kakade · Akshay Krishnamurthy
- 2022 Poster: Robust Training of Neural Networks Using Scale Invariant Architectures
  Zhiyuan Li · Srinadh Bhojanapalli · Manzil Zaheer · Sashank Jakkam Reddi · Sanjiv Kumar
- 2022 Poster: More Than a Toy: Random Matrix Models Predict How Real-World Neural Representations Generalize
  Alexander Wei · Wei Hu · Jacob Steinhardt
- 2022 Spotlight: More Than a Toy: Random Matrix Models Predict How Real-World Neural Representations Generalize
  Alexander Wei · Wei Hu · Jacob Steinhardt
- 2022 Spotlight: Understanding Contrastive Learning Requires Incorporating Inductive Biases
  Nikunj Umesh Saunshi · Jordan Ash · Surbhi Goel · Dipendra Kumar Misra · Cyril Zhang · Sanjeev Arora · Sham Kakade · Akshay Krishnamurthy
- 2022 Oral: Robust Training of Neural Networks Using Scale Invariant Architectures
  Zhiyuan Li · Srinadh Bhojanapalli · Manzil Zaheer · Sashank Jakkam Reddi · Sanjiv Kumar
- 2022 Poster: Understanding Gradient Descent on the Edge of Stability in Deep Learning
  Sanjeev Arora · Zhiyuan Li · Abhishek Panigrahi
- 2022 Spotlight: Understanding Gradient Descent on the Edge of Stability in Deep Learning
  Sanjeev Arora · Zhiyuan Li · Abhishek Panigrahi
- 2021 Poster: A Representation Learning Perspective on the Importance of Train-Validation Splitting in Meta-Learning
  Nikunj Umesh Saunshi · Arushi Gupta · Wei Hu
- 2021 Spotlight: A Representation Learning Perspective on the Importance of Train-Validation Splitting in Meta-Learning
  Nikunj Umesh Saunshi · Arushi Gupta · Wei Hu
- 2021 Poster: Near-Optimal Linear Regression under Distribution Shift
  Qi Lei · Wei Hu · Jason Lee
- 2021 Spotlight: Near-Optimal Linear Regression under Distribution Shift
  Qi Lei · Wei Hu · Jason Lee
- 2021 Poster: Bilinear Classes: A Structural Framework for Provable Generalization in RL
  Simon Du · Sham Kakade · Jason Lee · Shachar Lovett · Gaurav Mahajan · Wen Sun · Ruosong Wang
- 2021 Poster: Instabilities of Offline RL with Pre-Trained Neural Representation
  Ruosong Wang · Yifan Wu · Ruslan Salakhutdinov · Sham Kakade
- 2021 Poster: Risk Bounds and Rademacher Complexity in Batch Reinforcement Learning
  Yaqi Duan · Chi Jin · Zhiyuan Li
- 2021 Spotlight: Instabilities of Offline RL with Pre-Trained Neural Representation
  Ruosong Wang · Yifan Wu · Ruslan Salakhutdinov · Sham Kakade
- 2021 Spotlight: Risk Bounds and Rademacher Complexity in Batch Reinforcement Learning
  Yaqi Duan · Chi Jin · Zhiyuan Li
- 2021 Oral: Bilinear Classes: A Structural Framework for Provable Generalization in RL
  Simon Du · Sham Kakade · Jason Lee · Shachar Lovett · Gaurav Mahajan · Wen Sun · Ruosong Wang
- 2020 Poster: Provable Representation Learning for Imitation Learning via Bi-level Optimization
  Sanjeev Arora · Simon Du · Sham Kakade · Yuping Luo · Nikunj Umesh Saunshi
- 2020 Poster: InstaHide: Instance-hiding Schemes for Private Distributed Learning
  Yangsibo Huang · Zhao Song · Kai Li · Sanjeev Arora
- 2020 Poster: A Sample Complexity Separation between Non-Convex and Convex Meta-Learning
  Nikunj Umesh Saunshi · Yi Zhang · Mikhail Khodak · Sanjeev Arora
- 2020 Poster: Nearly Linear Row Sampling Algorithm for Quantile Regression
  Yi Li · Ruosong Wang · Lin Yang · Hanrui Zhang
- 2019 : Is Optimization a sufficient language to understand Deep Learning?
  Sanjeev Arora
- 2019 Poster: Dimensionality Reduction for Tukey Regression
  Kenneth Clarkson · Ruosong Wang · David Woodruff
- 2019 Poster: Width Provably Matters in Optimization for Deep Linear Neural Networks
  Simon Du · Wei Hu
- 2019 Poster: A Theoretical Analysis of Contrastive Unsupervised Representation Learning
  Nikunj Umesh Saunshi · Orestis Plevrakis · Sanjeev Arora · Mikhail Khodak · Hrishikesh Khandeparkar
- 2019 Oral: A Theoretical Analysis of Contrastive Unsupervised Representation Learning
  Nikunj Umesh Saunshi · Orestis Plevrakis · Sanjeev Arora · Mikhail Khodak · Hrishikesh Khandeparkar
- 2019 Oral: Dimensionality Reduction for Tukey Regression
  Kenneth Clarkson · Ruosong Wang · David Woodruff
- 2019 Oral: Width Provably Matters in Optimization for Deep Linear Neural Networks
  Simon Du · Wei Hu
- 2019 Poster: Provably efficient RL with Rich Observations via Latent State Decoding
  Simon Du · Akshay Krishnamurthy · Nan Jiang · Alekh Agarwal · Miroslav Dudik · John Langford
- 2019 Poster: Gradient Descent Finds Global Minima of Deep Neural Networks
  Simon Du · Jason Lee · Haochuan Li · Liwei Wang · Xiyu Zhai
- 2019 Oral: Provably efficient RL with Rich Observations via Latent State Decoding
  Simon Du · Akshay Krishnamurthy · Nan Jiang · Alekh Agarwal · Miroslav Dudik · John Langford
- 2019 Oral: Gradient Descent Finds Global Minima of Deep Neural Networks
  Simon Du · Jason Lee · Haochuan Li · Liwei Wang · Xiyu Zhai
- 2018 Poster: On the Power of Over-parametrization in Neural Networks with Quadratic Activation
  Simon Du · Jason Lee
- 2018 Poster: Fast and Sample Efficient Inductive Matrix Completion via Multi-Phase Procrustes Flow
  Xiao Zhang · Simon Du · Quanquan Gu
- 2018 Oral: Fast and Sample Efficient Inductive Matrix Completion via Multi-Phase Procrustes Flow
  Xiao Zhang · Simon Du · Quanquan Gu
- 2018 Oral: On the Power of Over-parametrization in Neural Networks with Quadratic Activation
  Simon Du · Jason Lee
- 2018 Poster: Gradient Descent Learns One-hidden-layer CNN: Don't be Afraid of Spurious Local Minima
  Simon Du · Jason Lee · Yuandong Tian · Aarti Singh · Barnabás Póczos
- 2018 Poster: Discrete-Continuous Mixtures in Probabilistic Programming: Generalized Semantics and Inference Algorithms
  Yi Wu · Siddharth Srivastava · Nicholas Hay · Simon Du · Stuart Russell
- 2018 Poster: Stronger Generalization Bounds for Deep Nets via a Compression Approach
  Sanjeev Arora · Rong Ge · Behnam Neyshabur · Yi Zhang
- 2018 Oral: Stronger Generalization Bounds for Deep Nets via a Compression Approach
  Sanjeev Arora · Rong Ge · Behnam Neyshabur · Yi Zhang
- 2018 Oral: Discrete-Continuous Mixtures in Probabilistic Programming: Generalized Semantics and Inference Algorithms
  Yi Wu · Siddharth Srivastava · Nicholas Hay · Simon Du · Stuart Russell
- 2018 Oral: Gradient Descent Learns One-hidden-layer CNN: Don't be Afraid of Spurious Local Minima
  Simon Du · Jason Lee · Yuandong Tian · Aarti Singh · Barnabás Póczos
- 2018 Poster: On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization
  Sanjeev Arora · Nadav Cohen · Elad Hazan
- 2018 Oral: On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization
  Sanjeev Arora · Nadav Cohen · Elad Hazan
- 2018 Tutorial: Toward Theoretical Understanding of Deep Learning
  Sanjeev Arora
- 2017 Poster: Stochastic Variance Reduction Methods for Policy Evaluation
  Simon Du · Jianshu Chen · Lihong Li · Lin Xiao · Dengyong Zhou
- 2017 Talk: Stochastic Variance Reduction Methods for Policy Evaluation
  Simon Du · Jianshu Chen · Lihong Li · Lin Xiao · Dengyong Zhou
- 2017 Poster: Generalization and Equilibrium in Generative Adversarial Nets (GANs)
  Sanjeev Arora · Rong Ge · Yingyu Liang · Tengyu Ma · Yi Zhang
- 2017 Talk: Generalization and Equilibrium in Generative Adversarial Nets (GANs)
  Sanjeev Arora · Rong Ge · Yingyu Liang · Tengyu Ma · Yi Zhang