Understanding the generalization and estimation error of estimators for simple models, such as linear and generalized linear models, has attracted much attention recently. This interest is driven in part by an intriguing observation from the machine learning community: highly over-parameterized neural networks achieve zero training error and yet generalize well to test samples. The phenomenon is captured by the so-called double descent curve, in which the generalization error begins to decrease again beyond the interpolation threshold. A series of recent works has sought to explain this behavior in simple models. In this work, we analyze the asymptotics of the estimation error of ridge estimators for convolutional linear models. These convolutional inverse problems, also known as deconvolution, arise naturally in fields such as seismology, imaging, and acoustics, among others. Our results hold for a large class of input distributions that includes i.i.d. features as a special case. We derive exact formulae for the estimation error of ridge estimators that hold in a certain high-dimensional regime, demonstrate the double descent phenomenon experimentally for convolutional models, and show that our theoretical results match the experiments.
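The setting the abstract describes can be illustrated with a minimal sketch: a convolutional (circulant) linear model with a ridge estimator, whose per-coordinate estimation error is the quantity the paper's formulae characterize. All names, dimensions, and the circulant construction below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (our own choices, not from the paper):
d = 64        # filter / signal dimension
n = 40        # number of samples (here n < d, an under-determined regime)
sigma = 0.1   # noise standard deviation
lam = 1e-2    # ridge penalty

# Convolutional measurement matrix: each row is a circular shift of a
# shared input signal, so A is (a slice of) a circulant matrix.
x = rng.standard_normal(d)
A = np.stack([np.roll(x, k) for k in range(n)])

# Ground-truth filter and noisy observations y = A w0 + noise.
w0 = rng.standard_normal(d) / np.sqrt(d)
y = A @ w0 + sigma * rng.standard_normal(n)

# Ridge estimator: w_hat = argmin_w ||y - A w||^2 + lam ||w||^2,
# solved in closed form via the normal equations.
w_hat = np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ y)

# Per-coordinate estimation error, the quantity whose high-dimensional
# asymptotics the paper characterizes exactly.
est_error = np.linalg.norm(w_hat - w0) ** 2 / d
print(f"per-coordinate estimation error: {est_error:.4f}")
```

Sweeping `n` past `d` (the interpolation threshold) and plotting `est_error` against `n / d` is one way to reproduce a double-descent-style curve empirically.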
Author Information
Mojtaba Sahraee-Ardakan (UCLA)
Tung Mai (Adobe Research)
Anup Rao (Adobe Research)
Ryan A. Rossi (Adobe Research)
Sundeep Rangan (NYU)
Alyson Fletcher (UCLA)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Spotlight: Asymptotics of Ridge Regression in Convolutional Models »
  Thu. Jul 22nd 12:45 -- 12:50 AM
More from the Same Authors
- 2021: Coresets for Classification – Simplified and Strengthened »
  Anup Rao · Tung Mai · Cameron Musco
- 2022 Poster: One-Pass Algorithms for MAP Inference of Nonsymmetric Determinantal Point Processes »
  Aravind Reddy · Ryan A. Rossi · Zhao Song · Anup Rao · Tung Mai · Nedim Lipka · Gang Wu · Eunyee Koh · Nesreen K Ahmed
- 2022 Poster: Online Balanced Experimental Design »
  David Arbour · Drew Dimmery · Tung Mai · Anup Rao
- 2022 Spotlight: Online Balanced Experimental Design »
  David Arbour · Drew Dimmery · Tung Mai · Anup Rao
- 2022 Spotlight: One-Pass Algorithms for MAP Inference of Nonsymmetric Determinantal Point Processes »
  Aravind Reddy · Ryan A. Rossi · Zhao Song · Anup Rao · Tung Mai · Nedim Lipka · Gang Wu · Eunyee Koh · Nesreen K Ahmed
- 2021: Coresets for Classification – Simplified and Strengthened »
  Tung Mai · Anup Rao · Cameron Musco
- 2021 Poster: Implicit Bias of Linear RNNs »
  Melikasadat Emami · Mojtaba Sahraee-Ardakan · Parthe Pandit · Sundeep Rangan · Alyson Fletcher
- 2021 Spotlight: Implicit Bias of Linear RNNs »
  Melikasadat Emami · Mojtaba Sahraee-Ardakan · Parthe Pandit · Sundeep Rangan · Alyson Fletcher
- 2021 Poster: Fundamental Tradeoffs in Distributionally Adversarial Training »
  Mohammad Mehrabi · Adel Javanmard · Ryan A. Rossi · Anup Rao · Tung Mai
- 2021 Spotlight: Fundamental Tradeoffs in Distributionally Adversarial Training »
  Mohammad Mehrabi · Adel Javanmard · Ryan A. Rossi · Anup Rao · Tung Mai
- 2020 Poster: Generalization Error of Generalized Linear Models in High Dimensions »
  Melikasadat Emami · Mojtaba Sahraee-Ardakan · Parthe Pandit · Sundeep Rangan · Alyson Fletcher
- 2020 Poster: Structured Policy Iteration for Linear Quadratic Regulator »
  Youngsuk Park · Ryan A. Rossi · Zheng Wen · Gang Wu · Handong Zhao