Owing to their extremely high expressive power, deep neural networks tend to completely memorize the training data, even when the labels are extremely noisy. To overcome overfitting to noisy labels, we propose a novel robust training method called SELFIE. Our key idea is to selectively refurbish and exploit unclean samples that can be corrected with high precision, thereby gradually increasing the number of available training samples. By virtue of this design, SELFIE effectively mitigates the risk of noise accumulation from false correction while fully exploiting the training data. To validate the superiority of SELFIE, we conducted extensive experiments on three data sets simulated with varying noise rates. The results show that SELFIE remarkably reduces the absolute test error, by up to 10.5 percentage points, compared with two state-of-the-art robust training methods.
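The refurbish-and-exploit idea described above can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's exact algorithm: the function name `refurbish_labels`, the normalized-entropy consistency criterion, and the threshold `epsilon` are all assumptions made here for illustration. The intuition is that a sample whose predicted label has been consistent over recent epochs can be corrected with high precision, so its noisy label is replaced by the dominant prediction:

```python
import numpy as np

def refurbish_labels(pred_history, noisy_labels, epsilon=0.05):
    """Illustrative sketch of selective label refurbishment.

    pred_history : (n_epochs, n_samples) array of predicted class labels
                   recorded over recent training epochs (assumed input).
    noisy_labels : (n_samples,) array of possibly corrupted labels.
    epsilon      : consistency threshold on normalized entropy (assumption).

    Returns the refurbished labels and a boolean mask of corrected samples.
    """
    pred_history = np.asarray(pred_history)
    n_epochs, n_samples = pred_history.shape
    n_classes = int(pred_history.max()) + 1
    refurbished = np.array(noisy_labels, copy=True)
    corrected = np.zeros(n_samples, dtype=bool)
    for i in range(n_samples):
        # Frequency of each predicted class over the recorded epochs.
        counts = np.bincount(pred_history[:, i], minlength=n_classes)
        probs = counts / n_epochs
        nz = probs[probs > 0]
        # Normalized entropy in [0, 1]; low entropy = consistent predictions.
        entropy = -(nz * np.log(nz)).sum() / np.log(n_classes)
        if entropy <= epsilon:
            # Consistent sample: refurbish with the dominant prediction.
            refurbished[i] = counts.argmax()
            corrected[i] = True
    return refurbished, corrected
```

In this sketch, only samples below the consistency threshold are corrected and re-used for training; the remainder would be handled by a separate low-loss selection step, mirroring how the method avoids accumulating noise from false corrections.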
Hwanjun Song (KAIST)
Minseok Kim (KAIST)
Jae-Gil Lee (KAIST)
2019 Poster: SELFIE: Refurbishing Unclean Samples for Robust Deep Learning
Wed Jun 12th, 06:30 -- 09:00 PM, Pacific Ballroom