Proper regularization is critical for speeding up training, improving generalization performance, and learning compact models that are cost-efficient. We propose and analyze regularized gradient descent algorithms for learning shallow neural networks. Our framework is general and covers weight sharing (convolutional networks), sparsity (network pruning), and low-rank constraints, among others. We first introduce the covering dimension to quantify the complexity of the constraint set and provide insights into the generalization properties. We then show that the proposed algorithms become well-behaved and local linear convergence occurs once the amount of data exceeds the covering dimension. Overall, our results demonstrate that near-optimal sample complexity is sufficient for efficient learning and illustrate how regularization can be beneficial for learning over-parameterized networks.
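The abstract describes regularized gradient descent over a structured constraint set (sparsity, weight sharing, low rank). Below is a minimal illustrative sketch of that idea, assuming a one-hidden-layer ReLU network, squared loss, and a sparsity constraint enforced by hard thresholding after each gradient step; the model, loss, sparsity level `s`, step size, and all names here are assumptions for illustration, not the paper's implementation.

```python
# Sketch of projected (regularized) gradient descent for a shallow network.
# Assumptions (not from the paper): one-hidden-layer ReLU model y = v^T relu(W x),
# squared loss, and a sparsity constraint enforced by hard thresholding.
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def hard_threshold(W, s):
    """Keep only the s largest-magnitude entries of W; zero out the rest
    (an example projection onto a low-dimensional constraint set)."""
    flat = W.ravel().copy()
    if s < flat.size:
        idx = np.argpartition(np.abs(flat), flat.size - s)[: flat.size - s]
        flat[idx] = 0.0
    return flat.reshape(W.shape)

def projected_gd(X, y, v, k, s, lr=1e-2, iters=500, seed=0):
    """Fit y ~ v^T relu(W x) over s-sparse W via projected gradient descent."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = hard_threshold(rng.standard_normal((k, d)) / np.sqrt(d), s)
    for _ in range(iters):
        H = relu(X @ W.T)            # hidden activations, shape (n, k)
        resid = H @ v - y            # residuals, shape (n,)
        # gradient of 0.5 * mean squared loss w.r.t. W
        grad = ((resid[:, None] * (H > 0)) * v[None, :]).T @ X / n
        W = hard_threshold(W - lr * grad, s)   # gradient step, then project
    return W

if __name__ == "__main__":
    # Tiny synthetic example with a planted sparse weight matrix.
    rng = np.random.default_rng(1)
    n, d, k, s = 2000, 20, 5, 30
    X = rng.standard_normal((n, d))
    W_true = hard_threshold(rng.standard_normal((k, d)), s)
    v = np.ones(k)
    y = relu(X @ W_true.T) @ v
    W_hat = projected_gd(X, y, v, k, s)
    print("train MSE:", np.mean((relu(X @ W_hat.T) @ v - y) ** 2))
```

Other constraint sets mentioned in the abstract (weight sharing, low rank) would slot in by swapping the projection step for the corresponding projection.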
Author Information
Samet Oymak (University of California, Riverside)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Poster: Learning Compact Neural Networks with Regularization
  Thu. Jul 12th, 04:15 -- 07:00 PM, Room: Hall B #180
More from the Same Authors
- 2021 Poster: Generalization Guarantees for Neural Architecture Search with Train-Validation Split
  Samet Oymak · Mingchen Li · Mahdi Soltanolkotabi
- 2021 Spotlight: Generalization Guarantees for Neural Architecture Search with Train-Validation Split
  Samet Oymak · Mingchen Li · Mahdi Soltanolkotabi
- 2019 Poster: Overparameterized Nonlinear Learning: Gradient Descent Takes the Shortest Path?
  Samet Oymak · Mahdi Soltanolkotabi
- 2019 Oral: Overparameterized Nonlinear Learning: Gradient Descent Takes the Shortest Path?
  Samet Oymak · Mahdi Soltanolkotabi