We examine regularized linear models on small data sets where the direction of each feature's effect on the outcome is known. We find that traditional regularizers, such as ridge regression and the Lasso, induce unnecessarily high bias in order to reduce variance. We propose an alternative regularizer that penalizes the differences between the weights assigned to the features. This model often achieves a better bias-variance trade-off than its competitors in supervised learning problems. We also give an example of its use within reinforcement learning, in learning to play the game of Tetris.
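The abstract does not give the regularizer's exact form, but a natural instantiation of "penalizing the differences between the weights" is the quadratic penalty λ Σ_{i<j} (w_i − w_j)², which equals wᵀLw for the Laplacian L of the complete graph on the features. Under that assumption (and assuming features have already been sign-aligned using the known directions), the estimator has a ridge-like closed form, sketched below; the function name and toy data are illustrative, not from the paper.

```python
import numpy as np

def weight_difference_regression(X, y, lam):
    """Linear regression penalizing pairwise weight differences.

    Minimizes ||y - Xw||^2 + lam * sum_{i<j} (w_i - w_j)^2.
    The penalty equals w^T L w with L = m*I - 1 1^T, the Laplacian of
    the complete graph on the m features, giving the ridge-like solution
        w = (X^T X + lam * L)^{-1} X^T y.
    Assumes features are already sign-aligned via the known directions,
    so shrinking weights toward each other is sensible.
    """
    m = X.shape[1]
    L = m * np.eye(m) - np.ones((m, m))  # complete-graph Laplacian
    return np.linalg.solve(X.T @ X + lam * L, X.T @ y)

# Toy data: true weights are nearly equal, the setting this penalty favors.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 4))
w_true = np.array([1.0, 1.1, 0.9, 1.0])
y = X @ w_true + 0.1 * rng.normal(size=30)

w_hat = weight_difference_regression(X, y, lam=5.0)
```

Because the constant vector lies in the null space of L, the penalty leaves the common level of the weights unshrunk: as λ grows the estimate converges to equal weights rather than to zero, which is how this regularizer trades less bias (in sign-aligned problems) for reduced variance compared with ridge or the Lasso.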
Jan Malte Lichtenberg (University of Bath)
Ozgur Simsek (University of Bath)
2019 Poster: Regularization in directable environments with application to Tetris
Tue Jun 11, 06:30-09:00 PM, Pacific Ballroom