Proper regularization is critical for speeding up training, improving generalization performance, and learning compact models that are cost-efficient. We propose and analyze regularized gradient descent algorithms for learning shallow neural networks. Our framework is general and covers weight-sharing (convolutional networks), sparsity (network pruning), and low-rank constraints, among others. We first introduce covering dimension to quantify the complexity of the constraint set and provide insight into the generalization properties. We then show that the proposed algorithms become well-behaved and exhibit local linear convergence once the amount of data exceeds the covering dimension. Overall, our results demonstrate that near-optimal sample complexity is sufficient for efficient learning and illustrate how regularization can be beneficial for learning over-parameterized networks.
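As a concrete illustration of this kind of constrained training (a minimal sketch only, not the paper's algorithm or analysis), the Python example below runs projected gradient descent on a one-hidden-layer ReLU network and enforces a sparsity constraint by hard thresholding after each gradient step. The synthetic data, step size, initialization, and sparsity level are all illustrative assumptions.

# Sketch: projected gradient descent for a shallow ReLU network with a
# sparsity constraint (hard thresholding). Illustrative only; all
# hyperparameters and the planted-model setup are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: n samples in d dimensions, k hidden units (planted model).
n, d, k = 500, 20, 5
X = rng.standard_normal((n, d))
W_true = rng.standard_normal((k, d))
W_true[np.abs(W_true) < 1.0] = 0.0          # make the planted weights sparse
y = np.maximum(X @ W_true.T, 0.0).sum(axis=1)  # shallow ReLU network output

def loss_and_grad(W):
    """Squared loss of the shallow ReLU net and its gradient w.r.t. W."""
    pre = X @ W.T                            # (n, k) pre-activations
    resid = np.maximum(pre, 0.0).sum(axis=1) - y
    loss = 0.5 * np.mean(resid ** 2)
    grad = ((resid[:, None] * (pre > 0)).T @ X) / n  # chain rule through ReLU
    return loss, grad

def project_sparse(W, s):
    """Projection onto the constraint set: keep the s largest-magnitude entries."""
    thresh = np.partition(np.abs(W).ravel(), -s)[-s]
    return np.where(np.abs(W) >= thresh, W, 0.0)

s = int(np.count_nonzero(W_true))            # assumed known sparsity level
W = 0.1 * rng.standard_normal((k, d))        # small random initialization
step = 0.5
for t in range(500):
    _, grad = loss_and_grad(W)
    W = project_sparse(W - step * grad, s)   # gradient step, then projection

final_loss, _ = loss_and_grad(W)
print(f"final loss: {final_loss:.4f}")

The projection step is the only change relative to plain gradient descent; other constraint classes mentioned in the abstract (weight-sharing, low rank) would swap in a different projection.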
Author Information
Samet Oymak (University of California, Riverside)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Oral: Learning Compact Neural Networks with Regularization
  Thu Jul 12th 03:20 -- 03:30 PM, Room A9
More from the Same Authors
- 2019 Poster: Overparameterized Nonlinear Learning: Gradient Descent Takes the Shortest Path?
  Samet Oymak · Mahdi Soltanolkotabi
- 2019 Oral: Overparameterized Nonlinear Learning: Gradient Descent Takes the Shortest Path?
  Samet Oymak · Mahdi Soltanolkotabi