Regularized Discriminative Alignment for Deep Representations under Label Shift
Abstract
Label shift refers to the distribution shift scenario where the marginal label distribution changes while the class-conditional distribution p(x|y) remains invariant. To address this challenge in complex real-world settings, we propose Regularized Discriminative Alignment for Label Shift (RDALS), a novel framework that adapts to target domains by aligning distributions within the deep latent space. By shifting the focus from raw inputs to learned representations, RDALS operates under a weaker and more practical invariance assumption. Specifically, we construct a moment-matching linear system using Linear Discriminant Analysis (LDA) and show that this choice maximizes numerical stability. We further provide a rigorous theoretical analysis, establishing finite-sample error bounds for importance weight estimation and generalization bounds for the adapted classifier. Extensive experiments on standard benchmarks demonstrate that RDALS significantly outperforms state-of-the-art baselines, achieving superior robustness and accuracy in both data-scarce and extreme-shift regimes.
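To make the moment-matching idea concrete, the following is a minimal sketch of the generic label-shift recipe that RDALS builds on, not the paper's LDA-based system: estimate a confusion-style moment matrix on held-out source data, estimate the target prediction marginal on unlabeled target data, and solve a linear system for the importance weights. All names here (`estimate_label_shift_weights`, the toy data) are illustrative assumptions.

```python
import numpy as np

def estimate_label_shift_weights(source_preds, source_labels, target_preds, n_classes):
    """Estimate importance weights w[c] = P_target(y=c) / P_source(y=c).

    Solves the moment-matching system C w = mu, where
    C[i, j] = P_source(pred = i, y = j) (joint, from held-out source data) and
    mu[i]   = P_target(pred = i)        (from unlabeled target data).
    This is a generic confusion-matrix variant, not RDALS's LDA construction.
    """
    # Joint confusion matrix on source data.
    C = np.zeros((n_classes, n_classes))
    for p, y in zip(source_preds, source_labels):
        C[p, y] += 1.0
    C /= len(source_labels)

    # Target prediction marginal.
    mu = np.bincount(target_preds, minlength=n_classes) / len(target_preds)

    # Least-squares solve; clip to keep the weights non-negative.
    w, *_ = np.linalg.lstsq(C, mu, rcond=None)
    return np.clip(w, 0.0, None)

# Toy usage: a perfect 2-class classifier with a balanced source (0.5, 0.5)
# and a shifted target marginal (0.8, 0.2); expected weights (1.6, 0.4).
src_y = np.array([0] * 50 + [1] * 50)
src_pred = src_y.copy()                    # perfect predictions on source
tgt_pred = np.array([0] * 80 + [1] * 20)   # shifted target predictions
w = estimate_label_shift_weights(src_pred, src_y, tgt_pred, n_classes=2)
```

In this toy case the confusion matrix is diagonal, so the system reduces to a per-class ratio of target to source marginals; with imperfect classifiers the off-diagonal terms make the linear solve (and its conditioning, which RDALS's LDA choice targets) matter.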