

Poster

Improving generalization by controlling label-noise information in neural network weights

Hrayr Harutyunyan · Kyle Reing · Greg Ver Steeg · Aram Galstyan

Keywords: [ Information Theory and Estimation ] [ Robust Statistics and Machine Learning ] [ Supervised Learning ] [ Algorithms ]


Abstract: In the presence of noisy or incorrect labels, neural networks have the undesirable tendency to memorize information about the noise. Standard regularization techniques such as dropout, weight decay, or data augmentation sometimes help, but do not prevent this behavior. If one considers neural network weights as random variables that depend on the data and the stochasticity of training, the amount of memorized information can be quantified with the Shannon mutual information between the weights and the vector of all training labels given the inputs, $I(w; \mathbf{y} \mid \mathbf{x})$. We show that for any training algorithm, low values of this term correspond to reduced memorization of label noise and better generalization bounds. To obtain these low values, we propose training algorithms that employ an auxiliary network that predicts gradients in the final layers of a classifier without accessing labels. We illustrate the effectiveness of our approach on versions of MNIST, CIFAR-10, and CIFAR-100 corrupted with various noise models, and on the large-scale Clothing1M dataset, which has noisy labels.
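To make the core idea concrete, below is a minimal, hypothetical sketch of label-free gradient prediction: an auxiliary network is trained to predict the cross-entropy gradient with respect to the logits from features alone, and the classifier is then updated with that predicted, label-free gradient via a surrogate loss. The class and function names (`GradPredictor`, `training_step`), the MSE objective for the auxiliary network, and the surrogate-loss trick are all illustrative assumptions and not the authors' exact algorithm.

```python
# Illustrative sketch (assumptions noted above), not the paper's exact method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradPredictor(nn.Module):
    """Predicts the gradient of the loss w.r.t. the logits from features only (no labels)."""
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                                 nn.Linear(128, num_classes))

    def forward(self, features):
        return self.net(features)

def training_step(classifier, aux, opt_cls, opt_aux, x, y):
    # Assumes the classifier returns (penultimate features, logits).
    features, logits = classifier(x)

    # True gradient of cross-entropy w.r.t. the logits; this is the only
    # place the labels y are used.
    with torch.no_grad():
        true_grad = F.softmax(logits, dim=1) - F.one_hot(y, logits.size(1)).float()

    # Train the auxiliary network to predict that gradient from label-free features.
    pred_grad = aux(features.detach())
    aux_loss = F.mse_loss(pred_grad, true_grad)
    opt_aux.zero_grad(); aux_loss.backward(); opt_aux.step()

    # Update the classifier with the predicted (label-free) gradient: the
    # surrogate's gradient w.r.t. the logits equals pred_grad.
    surrogate = (logits * pred_grad.detach()).sum()
    opt_cls.zero_grad(); surrogate.backward(); opt_cls.step()
```

Because the classifier's update depends on the labels only through the auxiliary network's predictions, the information about label noise that can flow into the classifier's weights is limited, which is the mechanism the abstract describes for keeping $I(w; \mathbf{y} \mid \mathbf{x})$ small.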
