Poster
Unsupervised Label Noise Modeling and Loss Correction
Eric Arazo · Diego Ortego · Paul Albert · Noel O'Connor · Kevin McGuinness

Thu Jun 13th 06:30 -- 09:00 PM @ Pacific Ballroom #176

Despite being robust to small amounts of label noise, convolutional neural networks trained with stochastic gradient methods have been shown to easily fit random labels. When there is a mixture of correct and mislabelled targets, networks tend to fit the former before the latter. This suggests using a suitable two-component mixture model as an unsupervised generative model of sample loss values during training, allowing online estimation of the probability that a sample is mislabelled. Specifically, we propose a beta mixture to estimate this probability and correct the loss by relying on the network prediction (the so-called bootstrapping loss). We further adapt mixup augmentation to drive our approach a step further. Experiments on CIFAR-10/100 and TinyImageNet demonstrate a robustness to label noise that substantially outperforms the recent state of the art. Source code is available at https://git.io/fjsvE and the Appendix at https://arxiv.org/abs/1904.11238.
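
To make the loss-modelling idea concrete, here is a minimal sketch of fitting a two-component beta mixture to normalized per-sample losses with EM and reading off the posterior of the high-loss component as the per-sample noise probability. This is not the authors' reference implementation (see the linked repository for that); the function name fit_beta_mixture, the median-split initialization, and the method-of-moments M-step are illustrative choices.

```python
import numpy as np
from scipy import stats

def fit_beta_mixture(losses, n_iter=10, eps=1e-4):
    """Fit a two-component beta mixture to per-sample training losses via EM.

    Returns, for every sample, the posterior probability of belonging to the
    high-loss (presumed mislabelled) component.
    """
    # Min-max normalize losses into the open interval (0, 1),
    # the support of the beta distribution.
    x = (losses - losses.min()) / max(losses.max() - losses.min(), eps)
    x = np.clip(x, eps, 1.0 - eps)

    # Initialize responsibilities by splitting at the median loss:
    # component 0 = low-loss (clean), component 1 = high-loss (noisy).
    gamma = np.zeros((len(x), 2))
    gamma[:, 1] = (x > np.median(x)).astype(float)
    gamma[:, 0] = 1.0 - gamma[:, 1]

    alphas, betas = np.ones(2), np.ones(2)
    weights = np.full(2, 0.5)

    for _ in range(n_iter):
        # M-step: weighted method-of-moments estimate of each beta component.
        for k in range(2):
            w = gamma[:, k] + eps
            mean = np.average(x, weights=w)
            var = np.average((x - mean) ** 2, weights=w) + eps
            common = mean * (1.0 - mean) / var - 1.0
            alphas[k] = max(mean * common, eps)
            betas[k] = max((1.0 - mean) * common, eps)
            weights[k] = gamma[:, k].mean()

        # E-step: posterior responsibility p(component k | loss_i).
        pdf = np.stack(
            [weights[k] * stats.beta.pdf(x, alphas[k], betas[k]) for k in range(2)],
            axis=1,
        )
        gamma = pdf / np.maximum(pdf.sum(axis=1, keepdims=True), eps)

    # Per-sample probability of being mislabelled.
    return gamma[:, 1]
```

In the paper, these per-sample probabilities then weight the bootstrapping loss: each target becomes a convex combination of the given label and the network's own prediction, so samples the mixture deems noisy are increasingly supervised by the model's output rather than their (likely wrong) label.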

Author Information

Eric Arazo (Insight Centre for Data Analytics (DCU))
Diego Ortego (Insight Centre for Data Analytics (DCU))
Paul Albert (Insight Centre for Data Analytics (DCU))
Noel O'Connor (Dublin City University (DCU))
Kevin McGuinness (Insight Centre for Data Analytics)
