Spotlight

Implicit Bias of the Step Size in Linear Diagonal Neural Networks

Mor Shpigel Nacson · Kavya Ravichandran · Nati Srebro · Daniel Soudry

Ballroom 1 & 2

Abstract: Focusing on diagonal linear networks as a model for understanding the implicit bias in underdetermined models, we show how the gradient descent step size can have a large qualitative effect on the implicit bias, and thus on generalization ability. In particular, we show how using a large step size for non-centered data can change the implicit bias from a "kernel"-type behavior to a "rich" (sparsity-inducing) regime, even when gradient flow, as studied in previous works, would not escape the "kernel" regime. We do so via dynamic stability, proving that convergence to dynamically stable global minima entails a bound on a weighted $\ell_1$-norm of the linear predictor, i.e., a "rich" regime. We prove that this leads to good generalization in a sparse regression setting.
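To make the setting concrete, below is a minimal NumPy sketch of the kind of experiment the abstract describes: gradient descent on a depth-2 diagonal linear network, using the $\beta = u \odot u - v \odot v$ parameterization common in this literature (an assumption about the paper's exact model), on sparse regression with non-centered features. The problem sizes, initialization scale, and step sizes are illustrative choices, not the paper's experimental settings; what counts as a "large" step size is data-dependent, since it is determined by which minima remain dynamically stable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Underdetermined sparse regression: d features, n < d samples,
# k-sparse ground truth. Features have mean 1, i.e. the data is
# non-centered, as in the abstract.
n, d, k = 50, 100, 3
X = rng.standard_normal((n, d)) + 1.0
beta_star = np.zeros(d)
beta_star[:k] = 1.0
y = X @ beta_star

def train(step_size, init_scale=0.3, iters=50_000):
    """Gradient descent on a depth-2 diagonal linear network.

    The linear predictor is beta = u*u - v*v and the loss is
    L = ||X beta - y||^2 / (2n); the u, v updates follow from the
    chain rule through this reparameterization. Returns the final
    predictor, or None if the iterates diverge.
    """
    u = np.full(d, init_scale)
    v = np.full(d, init_scale)
    for _ in range(iters):
        beta = u * u - v * v
        g = X.T @ (X @ beta - y) / n   # g = dL/dbeta
        u -= step_size * 2.0 * g * u   # dL/du =  2 * g * u
        v += step_size * 2.0 * g * v   # dL/dv = -2 * g * v
        if not np.all(np.isfinite(u)):
            return None                # step size too large: diverged
    return u * u - v * v

# Reference point: the minimum-l2-norm interpolator, i.e. the
# "kernel"-type solution, which is typically dense.
beta_l2 = X.T @ np.linalg.solve(X @ X.T, y)
print(f"min-l2 interpolator: l1 norm {np.linalg.norm(beta_l2, 1):.2f} "
      f"(sparse target l1 norm: {np.linalg.norm(beta_star, 1):.2f})")

# Illustrative small vs. large step size (tune for your own data).
for eta in (5e-4, 4e-3):
    beta = train(eta)
    if beta is None:
        print(f"eta={eta:g}: diverged")
        continue
    loss = np.mean((X @ beta - y) ** 2) / 2
    print(f"eta={eta:g}: train loss {loss:.2e}, "
          f"l1 norm {np.linalg.norm(beta, 1):.2f}, "
          f"recovery error {np.linalg.norm(beta - beta_star):.3f}")
```

Under the abstract's claim, solutions reached with the larger (but still stable) step size should have smaller $\ell_1$ norm and better support recovery than both the small-step-size solution and the minimum-$\ell_2$-norm interpolator.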
