This paper proves that robustness implies generalization via data-dependent generalization bounds; robustness and generalization are thus shown to be closely connected in a data-dependent manner. Our bounds improve on previous bounds in two directions, solving an open problem that had seen little progress since 2010: first, by reducing the dependence on the covering number, and second, by removing the dependence on the hypothesis space. We present several examples, including ones for lasso and deep learning, in which our bounds are provably preferable. Experiments on real-world data and theoretical models demonstrate near-exponential improvements in various situations. These improvements require no additional assumptions on the unknown distribution; we only incorporate an observable and computable property of the training samples. A key technical innovation is an improved concentration bound for multinomial random variables, which is of independent interest beyond robustness and generalization.
Author Information
Kenji Kawaguchi (National University of Singapore)
Zhun Deng (Harvard)
Kyle Luh (Harvard University)
Jiaoyang Huang (IAS)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Robustness Implies Generalization via Data-Dependent Generalization Bounds »
  Tue. Jul 19th through Wed. Jul 20th, Room Hall E #1125
More from the Same Authors
- 2023 Poster: Auxiliary Learning as an Asymmetric Bargaining Game »
  Aviv Shamsian · Aviv Navon · Neta Glazer · Kenji Kawaguchi · Gal Chechik · Ethan Fetaya
- 2023 Poster: GFlowOut: Dropout with Generative Flow Networks »
  Dianbo Liu · Moksh Jain · Bonaventure F. P. Dossou · Qianli Shen · Salem Lahlou · Anirudh Goyal · Nikolay Malkin · Chris Emezue · Dinghuai Zhang · Nadhir Hassen · Xu Ji · Kenji Kawaguchi · Yoshua Bengio
- 2023 Poster: Discrete Key-Value Bottleneck »
  Frederik Träuble · Anirudh Goyal · Nasim Rahaman · Michael Mozer · Kenji Kawaguchi · Yoshua Bengio · Bernhard Schölkopf
- 2023 Poster: Scalable Set Encoding with Universal Mini-Batch Consistency and Unbiased Full Set Gradient Approximation »
  Jeffrey Willette · Seanie Lee · Bruno Andreis · Kenji Kawaguchi · Juho Lee · Sung Ju Hwang
- 2023 Poster: How Does Information Bottleneck Help Deep Learning? »
  Kenji Kawaguchi · Zhun Deng · Xu Ji · Jiaoyang Huang
- 2022 Poster: When and How Mixup Improves Calibration »
  Linjun Zhang · Zhun Deng · Kenji Kawaguchi · James Zou
- 2022 Spotlight: When and How Mixup Improves Calibration »
  Linjun Zhang · Zhun Deng · Kenji Kawaguchi · James Zou
- 2022 Poster: Multi-Task Learning as a Bargaining Game »
  Aviv Navon · Aviv Shamsian · Idan Achituve · Haggai Maron · Kenji Kawaguchi · Gal Chechik · Ethan Fetaya
- 2022 Spotlight: Multi-Task Learning as a Bargaining Game »
  Aviv Navon · Aviv Shamsian · Idan Achituve · Haggai Maron · Kenji Kawaguchi · Gal Chechik · Ethan Fetaya
- 2021 Poster: Toward Better Generalization Bounds with Locally Elastic Stability »
  Zhun Deng · Hangfeng He · Weijie Su
- 2021 Spotlight: Toward Better Generalization Bounds with Locally Elastic Stability »
  Zhun Deng · Hangfeng He · Weijie Su
- 2021 Poster: Optimization of Graph Neural Networks: Implicit Acceleration by Skip Connections and More Depth »
  Keyulu Xu · Mozhi Zhang · Stefanie Jegelka · Kenji Kawaguchi
- 2021 Spotlight: Optimization of Graph Neural Networks: Implicit Acceleration by Skip Connections and More Depth »
  Keyulu Xu · Mozhi Zhang · Stefanie Jegelka · Kenji Kawaguchi
- 2020 Poster: Interpreting Robust Optimization via Adversarial Influence Functions »
  Zhun Deng · Cynthia Dwork · Jialiang Wang · Linjun Zhang
- 2020 Poster: Dynamics of Deep Neural Networks and Neural Tangent Hierarchy »
  Jiaoyang Huang · Horng-Tzer Yau
- 2020 Poster: Towards Understanding the Dynamics of the First-Order Adversaries »
  Zhun Deng · Hangfeng He · Jiaoyang Huang · Weijie Su