Deep nets generalize well despite having more parameters than the number of training samples. Recent works try to explain this using PAC-Bayes and margin-based analyses, but so far do not yield sample complexity bounds better than naive parameter counting. The current paper shows generalization bounds that are orders of magnitude better in practice. They rely upon new succinct reparametrizations of the trained net --- a compression that is explicit and efficient --- and follow from a simple compression-based framework introduced here. Our results also provide some theoretical justification for the widespread empirical success in compressing deep nets. The analysis of the correctness of our compression relies upon newly identified noise stability properties of trained deep nets, which are also experimentally verified. The study of these properties and the resulting generalization bounds is also extended to convolutional nets, which had eluded earlier attempts at proving generalization.
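The noise stability property the abstract refers to can be probed directly: inject Gaussian noise at an intermediate layer of a trained net and compare the resulting deviation of the network's output to the size of the injected noise; schematically, the compression framework then turns such stability into bounds that scale with the compressed parameter count rather than the raw one. Below is a minimal illustrative sketch of that measurement, not the authors' code: the random fully connected ReLU net, the layer sizes, and the choice of noise scale are all assumptions made for demonstration.

```python
# Minimal sketch (assumed setup, not the paper's code): inject Gaussian noise
# at an intermediate layer and measure how much the final output changes.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Stand-in for a trained net: a stack of random fully connected ReLU layers.
layer_dims = [64, 256, 256, 256, 10]
weights = [rng.standard_normal((m, n)) / np.sqrt(n)
           for n, m in zip(layer_dims[:-1], layer_dims[1:])]

def forward(x, noise=None, noise_layer=None):
    """Run the net; optionally add `noise` to the activation entering
    layer `noise_layer` (layers counted from the input)."""
    h = x
    for i in range(len(weights)):
        if noise is not None and i == noise_layer:
            h = h + noise
        h = weights[i] @ h
        if i < len(weights) - 1:   # no ReLU on the output layer
            h = relu(h)
    return h

x = rng.standard_normal(layer_dims[0])
clean_out = forward(x)

# Build the activation entering layer l, then perturb it with Gaussian noise
# scaled to have the same norm as the activation itself.
l = 2
h = x
for i in range(l):
    h = relu(weights[i] @ h)
eta = rng.standard_normal(h.shape)
eta *= np.linalg.norm(h) / np.linalg.norm(eta)

noisy_out = forward(x, noise=eta, noise_layer=l)
ratio = np.linalg.norm(noisy_out - clean_out) / np.linalg.norm(clean_out)
print(f"relative output change from activation-scale noise at layer {l}: {ratio:.3f}")
# For noise-stable trained nets this ratio is reported to be small; for the
# random weights above the script only demonstrates the measurement itself,
# not the phenomenon.
```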
Author Information
Sanjeev Arora (Princeton University and Institute for Advanced Study)
Rong Ge (Duke University)
Behnam Neyshabur (New York University)
Yi Zhang (Princeton University)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Poster: Stronger Generalization Bounds for Deep Nets via a Compression Approach »
  Thu. Jul 12th 04:15 -- 07:00 PM, Room Hall B #184
More from the Same Authors
- 2023 Poster: Task-Specific Skill Localization in Fine-tuned Language Models »
  Abhishek Panigrahi · Nikunj Saunshi · Haoyu Zhao · Sanjeev Arora
- 2023 Poster: Provably Learning Diverse Features in Multi-View Data with Midpoint Mixup »
  Muthu Chidambaram · Xiang Wang · Chenwei Wu · Rong Ge
- 2023 Poster: Hiding Data Helps: On the Benefits of Masking for Sparse Coding »
  Muthu Chidambaram · Chenwei Wu · Yu Cheng · Rong Ge
- 2023 Poster: A Kernel-Based View of Language Model Fine-Tuning »
  Sadhika Malladi · Alexander Wettig · Dingli Yu · Danqi Chen · Sanjeev Arora
- 2023 Poster: Implicit Regularization Leads to Benign Overfitting for Sparse Linear Regression »
  Mo Zhou · Rong Ge
- 2022: On the SDEs and Scaling Rules for Adaptive Gradient Algorithms »
  Sadhika Malladi · Kaifeng Lyu · Abhishek Panigrahi · Sanjeev Arora
- 2022: Implicit Bias of Gradient Descent on Reparametrized Models: On Equivalence to Mirror Descent »
  Zhiyuan Li · Tianhao Wang · Jason Lee · Sanjeev Arora
- 2022 Poster: Online Algorithms with Multiple Predictions »
  Keerti Anand · Rong Ge · Amit Kumar · Debmalya Panigrahi
- 2022 Spotlight: Online Algorithms with Multiple Predictions »
  Keerti Anand · Rong Ge · Amit Kumar · Debmalya Panigrahi
- 2022 Poster: Extracting Latent State Representations with Linear Dynamics from Rich Observations »
  Abraham Frandsen · Rong Ge · Holden Lee
- 2022 Poster: Understanding Contrastive Learning Requires Incorporating Inductive Biases »
  Nikunj Umesh Saunshi · Jordan Ash · Surbhi Goel · Dipendra Kumar Misra · Cyril Zhang · Sanjeev Arora · Sham Kakade · Akshay Krishnamurthy
- 2022 Spotlight: Understanding Contrastive Learning Requires Incorporating Inductive Biases »
  Nikunj Umesh Saunshi · Jordan Ash · Surbhi Goel · Dipendra Kumar Misra · Cyril Zhang · Sanjeev Arora · Sham Kakade · Akshay Krishnamurthy
- 2022 Spotlight: Extracting Latent State Representations with Linear Dynamics from Rich Observations »
  Abraham Frandsen · Rong Ge · Holden Lee
- 2022 Poster: Understanding Gradient Descent on the Edge of Stability in Deep Learning »
  Sanjeev Arora · Zhiyuan Li · Abhishek Panigrahi
- 2022 Spotlight: Understanding Gradient Descent on the Edge of Stability in Deep Learning »
  Sanjeev Arora · Zhiyuan Li · Abhishek Panigrahi
- 2021 Poster: Guarantees for Tuning the Step Size using a Learning-to-Learn Approach »
  Xiang Wang · Shuai Yuan · Chenwei Wu · Rong Ge
- 2021 Spotlight: Guarantees for Tuning the Step Size using a Learning-to-Learn Approach »
  Xiang Wang · Shuai Yuan · Chenwei Wu · Rong Ge
- 2020 Poster: High-dimensional Robust Mean Estimation via Gradient Descent »
  Yu Cheng · Ilias Diakonikolas · Rong Ge · Mahdi Soltanolkotabi
- 2020 Poster: Provable Representation Learning for Imitation Learning via Bi-level Optimization »
  Sanjeev Arora · Simon Du · Sham Kakade · Yuping Luo · Nikunj Umesh Saunshi
- 2020 Poster: InstaHide: Instance-hiding Schemes for Private Distributed Learning »
  Yangsibo Huang · Zhao Song · Kai Li · Sanjeev Arora
- 2020 Poster: A Sample Complexity Separation between Non-Convex and Convex Meta-Learning »
  Nikunj Umesh Saunshi · Yi Zhang · Mikhail Khodak · Sanjeev Arora
- 2020 Poster: Customizing ML Predictions for Online Algorithms »
  Keerti Anand · Rong Ge · Debmalya Panigrahi
- 2019: Is Optimization a sufficient language to understand Deep Learning? »
  Sanjeev Arora
- 2019 Poster: A Theoretical Analysis of Contrastive Unsupervised Representation Learning »
  Nikunj Umesh Saunshi · Orestis Plevrakis · Sanjeev Arora · Mikhail Khodak · Hrishikesh Khandeparkar
- 2019 Poster: Efficient Full-Matrix Adaptive Regularization »
  Naman Agarwal · Brian Bullins · Xinyi Chen · Elad Hazan · Karan Singh · Cyril Zhang · Yi Zhang
- 2019 Oral: A Theoretical Analysis of Contrastive Unsupervised Representation Learning »
  Nikunj Umesh Saunshi · Orestis Plevrakis · Sanjeev Arora · Mikhail Khodak · Hrishikesh Khandeparkar
- 2019 Oral: Efficient Full-Matrix Adaptive Regularization »
  Naman Agarwal · Brian Bullins · Xinyi Chen · Elad Hazan · Karan Singh · Cyril Zhang · Yi Zhang
- 2019 Poster: Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks »
  Sanjeev Arora · Simon Du · Wei Hu · Zhiyuan Li · Ruosong Wang
- 2019 Oral: Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks »
  Sanjeev Arora · Simon Du · Wei Hu · Zhiyuan Li · Ruosong Wang
- 2018 Poster: Global Convergence of Policy Gradient Methods for the Linear Quadratic Regulator »
  Maryam Fazel · Rong Ge · Sham Kakade · Mehran Mesbahi
- 2018 Oral: Global Convergence of Policy Gradient Methods for the Linear Quadratic Regulator »
  Maryam Fazel · Rong Ge · Sham Kakade · Mehran Mesbahi
- 2018 Poster: On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization »
  Sanjeev Arora · Nadav Cohen · Elad Hazan
- 2018 Oral: On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization »
  Sanjeev Arora · Nadav Cohen · Elad Hazan
- 2018 Tutorial: Toward Theoretical Understanding of Deep Learning »
  Sanjeev Arora
- 2017 Poster: How to Escape Saddle Points Efficiently »
  Chi Jin · Rong Ge · Praneeth Netrapalli · Sham Kakade · Michael Jordan
- 2017 Talk: How to Escape Saddle Points Efficiently »
  Chi Jin · Rong Ge · Praneeth Netrapalli · Sham Kakade · Michael Jordan
- 2017 Poster: No Spurious Local Minima in Nonconvex Low Rank Problems: A Unified Geometric Analysis »
  Rong Ge · Chi Jin · Yi Zheng
- 2017 Poster: Generalization and Equilibrium in Generative Adversarial Nets (GANs) »
  Sanjeev Arora · Rong Ge · Yingyu Liang · Tengyu Ma · Yi Zhang
- 2017 Talk: No Spurious Local Minima in Nonconvex Low Rank Problems: A Unified Geometric Analysis »
  Rong Ge · Chi Jin · Yi Zheng
- 2017 Talk: Generalization and Equilibrium in Generative Adversarial Nets (GANs) »
  Sanjeev Arora · Rong Ge · Yingyu Liang · Tengyu Ma · Yi Zhang