Oral

Regularization in directable environments with application to Tetris

Jan Malte Lichtenberg · Özgür Şimşek

Abstract:

We examine regularized linear models on small data sets in which the direction of each feature's effect on the response is known. We find that traditional regularizers, such as ridge regression and the Lasso, induce unnecessarily high bias in order to reduce variance. We propose an alternative regularizer that penalizes the differences between the weights assigned to the features. The resulting model often achieves a better bias-variance tradeoff than its competitors in supervised learning problems. We also give an example of its use in reinforcement learning, when learning to play the game of Tetris.
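A minimal sketch of the idea, under assumptions the abstract does not spell out: if the penalty is taken to be the sum of squared pairwise weight differences, then sum_{i<j} (w_i - w_j)^2 = w^T (p*I - 1 1^T) w, and the estimator has a ridge-like closed form. The function name, the squared-difference form of the penalty, and the toy data below are illustrative only, not taken from the paper.

```python
import numpy as np

def difference_penalized_regression(X, y, lam):
    """Linear regression penalizing pairwise weight differences.

    Minimizes ||y - X w||^2 + lam * sum_{i<j} (w_i - w_j)^2.
    The penalty equals w^T D w with D = p*I - 1 1^T, so the solution
    has the same closed form as ridge regression with D in place of I.
    """
    n, p = X.shape
    D = p * np.eye(p) - np.ones((p, p))  # Gram matrix of the difference penalty
    return np.linalg.solve(X.T @ X + lam * D, X.T @ y)

# Toy usage: features are pre-signed so that each is known to have a
# positive effect, matching the setting in the abstract where feature
# directions are known.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 4))
true_w = np.array([1.0, 0.9, 1.1, 1.0])        # nearly equal true weights
y = X @ true_w + rng.normal(scale=0.5, size=30)
print(difference_penalized_regression(X, y, lam=1.0))
```

Note the limiting behavior that separates this penalty from ridge and the Lasso: because the penalty matrix D annihilates the all-ones vector, increasing lam shrinks the weights toward a common value rather than toward zero, reducing variance without the bias of shrinking same-direction features away entirely.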
