Label noise, one of the most common types of distribution shift, severely degrades the generalization performance of deep neural networks, making robust training with noisy labels an important task in modern deep learning. In this paper, we propose a framework, coined Adaptive LAbel smoothing on Sub-ClAssifier (ALASCA), that provides a robust feature extractor with theoretical guarantees and negligible additional computation. First, we derive that label smoothing (LS) incurs an implicit Lipschitz regularization (LR). Building on this derivation, we apply adaptive LS (ALS) to sub-classifier architectures as a practical way of imposing adaptive LR on intermediate layers. We conduct extensive experiments combining ALASCA with previous noise-robust methods on several datasets and show that our framework consistently outperforms the corresponding baselines.
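As a rough illustration of the mechanism the abstract describes, the sketch below attaches auxiliary sub-classifiers to intermediate features and trains them with a label-smoothed cross-entropy whose per-example smoothing strength adapts to the model's confidence. This is a minimal sketch under assumptions: the toy backbone, the confidence-based rule for `eps`, and the weight `lam` are illustrative placeholders, not the paper's actual ALS schedule.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def smoothed_ce(logits, targets, eps):
    # Cross-entropy against the smoothed target (1 - eps) * one_hot + eps / K.
    # `eps` may be a scalar or a per-example tensor of shape (batch,).
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    uniform = -log_probs.mean(dim=-1)  # CE against the uniform distribution
    return ((1.0 - eps) * nll + eps * uniform).mean()

class SubClassifierNet(nn.Module):
    # Toy two-block MLP whose hidden features feed auxiliary (sub-)classifiers;
    # a real use would hook intermediate blocks of a CNN backbone instead.
    def __init__(self, in_dim=32, hidden=64, num_classes=10):
        super().__init__()
        self.block1 = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.block2 = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, num_classes)
        self.sub_heads = nn.ModuleList(
            [nn.Linear(hidden, num_classes) for _ in range(2)]
        )

    def forward(self, x):
        h1 = self.block1(x)
        h2 = self.block2(h1)
        sub_logits = [head(h) for head, h in zip(self.sub_heads, (h1, h2))]
        return self.head(h2), sub_logits

def alasca_style_loss(main_logits, sub_logits, targets, lam=0.5):
    # Hypothetical adaptive rule: smooth more when the main classifier is
    # unconfident about the given (possibly noisy) label.
    with torch.no_grad():
        conf = F.softmax(main_logits, dim=-1).gather(
            -1, targets.unsqueeze(-1)).squeeze(-1)
        eps = 1.0 - conf  # per-example smoothing strength in [0, 1]
    loss = F.cross_entropy(main_logits, targets)
    for logits in sub_logits:
        loss = loss + lam * smoothed_ce(logits, targets, eps)
    return loss
```

In this sketch, smoothing is applied only to the sub-classifier losses so that the regularization acts on the intermediate features (the implicit LR effect derived in the paper) while the main head keeps an unsmoothed training signal; the exact adaptive schedule should be taken from the published method.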
Author Information
Jongwoo Ko (KAIST)
Bongsoo Yi (University of North Carolina at Chapel Hill)
Se-Young Yun (KAIST)