Learning Optimal Linear Regularizers

Matthew Streeter

Pacific Ballroom #134

Keywords: [ Unsupervised Learning ] [ Supervised Learning ] [ Statistical Learning Theory ]


We present algorithms for efficiently learning regularizers that improve generalization. Our approach is based on the insight that regularizers can be viewed as upper bounds on the generalization gap, and that reducing the slack in the bound can improve performance on test data. For a broad class of regularizers, the hyperparameters that give the best upper bound can be computed using linear programming. Under certain Bayesian assumptions, solving the LP lets us "jump" to the optimal hyperparameters given very limited data. This suggests a natural algorithm for tuning regularization hyperparameters, which we show to be effective on both real and synthetic data.
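The abstract does not spell out the LP, but the following minimal sketch illustrates one plausible reading of it, under assumptions of our own: the regularizer is linear in a feature vector phi(theta) (e.g., L1 and L2 norms of the weights), we observe a handful of trained models with their training and validation losses, and we choose non-negative coefficients lambda so that train_loss_i + lambda . phi_i upper-bounds val_loss_i for every observed model while the total slack in these bounds is minimized. All names and the toy data below are hypothetical, not the paper's.

```python
# Hedged sketch: fit a linear regularizer by minimizing the slack in
# per-model generalization-gap upper bounds, posed as an LP. This is an
# illustrative reconstruction, not the paper's exact formulation.
import numpy as np
from scipy.optimize import linprog


def fit_linear_regularizer(train_losses, val_losses, phi):
    """phi: (n_models, n_features) matrix of regularizer features per model."""
    # Observed generalization gaps, with validation loss as a test-loss proxy.
    gap = np.asarray(val_losses) - np.asarray(train_losses)
    # Total slack = sum_i (lambda . phi_i - gap_i); the gap term is a
    # constant, so it suffices to minimize sum_i lambda . phi_i.
    c = phi.sum(axis=0)
    # Upper-bound constraint lambda . phi_i >= gap_i, rewritten for linprog
    # as -phi_i . lambda <= -gap_i, with lambda >= 0.
    result = linprog(c, A_ub=-phi, b_ub=-gap,
                     bounds=[(0, None)] * phi.shape[1])
    return result.x  # learned regularization coefficients


# Toy usage: three models, features = (L2 norm, L1 norm) of their weights.
phi = np.array([[4.0, 2.0],
                [1.0, 0.5],
                [9.0, 3.0]])
train_losses = [0.10, 0.30, 0.05]
val_losses = [0.35, 0.38, 0.60]
print(fit_linear_regularizer(train_losses, val_losses, phi))
```

Under this reading, "jumping" to good hyperparameters amounts to solving one small LP over a few observed models rather than sweeping each regularization coefficient independently.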
