We present algorithms for efficiently learning regularizers that improve generalization. Our approach is based on the insight that regularizers can be viewed as upper bounds on the generalization gap, and that reducing the slack in the bound can improve performance on test data. For a broad class of regularizers, the hyperparameters that give the best upper bound can be computed using linear programming. Under certain Bayesian assumptions, solving the LP lets us "jump" to the optimal hyperparameters given very limited data. This suggests a natural algorithm for tuning regularization hyperparameters, which we show to be effective on both real and synthetic data.
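To make the LP step concrete, here is a minimal sketch, not the paper's exact formulation: it assumes a linear regularizer R_lambda(theta) = lambda . phi(theta) and a small sample of trained models with measured training and validation losses, then solves for the weights lambda >= 0 that make the regularized training loss the tightest upper bound on validation loss. The function name, feature map, and toy data are invented for illustration.

```python
# Hypothetical sketch: fit regularizer weights by linear programming so
# that  train_loss(theta) + lambda . phi(theta)  upper-bounds
# val_loss(theta) on every sampled model, with minimum total slack.
import numpy as np
from scipy.optimize import linprog

def fit_linear_regularizer(train_losses, val_losses, features):
    """Solve: min_lambda sum_i slack_i
       s.t.  train_i + lambda . phi_i >= val_i,  lambda >= 0."""
    train = np.asarray(train_losses, dtype=float)
    val = np.asarray(val_losses, dtype=float)
    phi = np.asarray(features, dtype=float)   # shape (n_models, n_features)
    # Total slack sum_i (train_i + lambda.phi_i - val_i) is linear in
    # lambda; the constant terms drop out, leaving cost c = sum_i phi_i.
    c = phi.sum(axis=0)
    # Rewrite train_i + lambda.phi_i >= val_i in linprog's A_ub @ x <= b_ub
    # form: -phi_i . lambda <= train_i - val_i.
    result = linprog(c, A_ub=-phi, b_ub=train - val,
                     bounds=[(0, None)] * phi.shape[1])
    return result.x

# Toy usage: three models, features phi(theta) = [L1 norm, squared L2 norm].
lam = fit_linear_regularizer(
    train_losses=[0.10, 0.20, 0.05],
    val_losses=[0.30, 0.25, 0.40],
    features=[[1.0, 0.5], [0.4, 0.2], [2.0, 3.0]],
)
print("learned regularizer weights:", lam)
```

Because both the slack objective and the upper-bound constraints are linear in lambda, the whole problem stays an LP regardless of how many regularizer features phi includes.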
Matthew Streeter (Google)
Related Events
2019 Poster: Learning Optimal Linear Regularizers
Wed. Jun 12th, 01:30 -- 04:00 AM, Pacific Ballroom #134