

Poster

Learning Label Shift Correction for Test-Agnostic Long-Tailed Recognition

Tong Wei · Zhen Mao · Zi-Hao Zhou · Yuanyu Wan · Min-Ling Zhang

Hall C 4-9 #2203
Tue 23 Jul 4:30 a.m. PDT — 6 a.m. PDT

Abstract:

Long-tail learning primarily focuses on mitigating the label distribution shift between long-tailed training data and uniformly distributed test data. However, real-world applications often pose a more intricate challenge in which the test label distribution is unknown, i.e., test-agnostic. To address this problem, we first establish theoretically that the generalization error can be substantially reduced if the test label distribution is estimated precisely. Motivated by this insight, we introduce a simple yet effective solution called label shift correction (LSC). LSC estimates the test label distribution within the proposed framework of generalized black-box shift estimation and adjusts the predictions of a pre-trained model to align with the test distribution. Theoretical analysis confirms that an accurate estimate of the test label distribution effectively reduces the generalization error. Extensive experiments demonstrate that our method significantly outperforms previous state-of-the-art approaches, especially under non-uniform test label distributions. Notably, the proposed method is general and complements existing long-tail learning approaches, consistently improving their performance. The source code is available at https://github.com/Stomach-ache/label-shift-correction
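The two-step pipeline the abstract describes (estimate the test label distribution from a black-box model's outputs, then reweight the model's predictions) can be sketched as follows. This is a minimal illustration of classic black-box shift estimation with a least-squares solve, not the paper's generalized estimator; the function names and the simplex projection are assumptions for the sketch.

```python
import numpy as np

def estimate_test_prior(val_probs, val_labels, test_probs, num_classes):
    """Estimate the test label distribution BBSE-style: treat the
    classifier as a black box and match its average output on test
    data against its per-class behaviour on held-out validation data."""
    # C[:, j] = expected soft prediction given true class j, measured on
    # validation data drawn from the (long-tailed) training distribution.
    C = np.stack([val_probs[val_labels == j].mean(axis=0)
                  for j in range(num_classes)], axis=1)
    # mu[i] = average predicted probability of class i on test data.
    mu = test_probs.mean(axis=0)
    # Under pure label shift, C @ q_test = mu; solve for q_test.
    q = np.linalg.lstsq(C, mu, rcond=None)[0]
    q = np.clip(q, 0.0, None)  # crude projection back onto the simplex
    return q / q.sum()

def correct_predictions(test_probs, train_prior, test_prior):
    """Post-hoc adjustment: reweight softmax outputs by the prior
    ratio q(y) / p(y) and renormalize each row."""
    adjusted = test_probs * (test_prior / train_prior)
    return adjusted / adjusted.sum(axis=1, keepdims=True)
```

With an accurate classifier and a well-conditioned confusion matrix, the least-squares solve recovers the test prior; the correction step then shifts probability mass toward classes that are more frequent at test time than during training.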
