ALASCA: Rethinking Label Smoothing for Deep Learning Under Label Noise
Jongwoo Ko · Bongsoo Yi · Se-Young Yun

Because label noise, one of the most common distribution shifts, severely degrades the generalization performance of deep neural networks, robust training with noisy labels has become an important task in modern deep learning. In this paper, we propose a framework, coined Adaptive LAbel smoothing on Sub-ClAssifier (ALASCA), that provides a robust feature extractor with theoretical guarantees and negligible additional computation. First, we derive that label smoothing (LS) incurs an implicit Lipschitz regularization (LR). Building on this derivation, we apply adaptive LS (ALS) to sub-classifier architectures as a practical means of applying adaptive LR to intermediate layers. We conduct extensive experiments with ALASCA, combining it with previous noise-robust methods on several datasets, and show that our framework consistently outperforms the corresponding baselines.
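The label smoothing the abstract builds on replaces a one-hot target with a mixture of the one-hot vector and the uniform distribution. A minimal sketch of standard (non-adaptive) LS target construction is below; the function name and the smoothing factor `alpha` are illustrative choices, not part of the paper's API:

```python
import numpy as np

def smooth_labels(labels: np.ndarray, num_classes: int, alpha: float = 0.1) -> np.ndarray:
    """Turn integer class labels into label-smoothed targets.

    Each one-hot target is mixed with the uniform distribution:
    (1 - alpha) * one_hot + alpha / num_classes.
    """
    one_hot = np.eye(num_classes)[labels]          # shape: (batch, num_classes)
    return (1.0 - alpha) * one_hot + alpha / num_classes

# Example: true class 2 out of 4 classes, alpha = 0.2
targets = smooth_labels(np.array([2]), num_classes=4, alpha=0.2)
# True class receives 0.85 probability mass; each other class receives 0.05.
```

ALASCA's adaptive variant (ALS) modulates this smoothing per example and applies it at sub-classifiers attached to intermediate layers; the sketch above only shows the basic LS target that the adaptive scheme generalizes.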

Author Information

Jongwoo Ko (KAIST)
Bongsoo Yi (University of North Carolina at Chapel Hill)
Se-Young Yun (KAIST)
