AES: Curing Optimizer Blindness in Long-Tailed Recognition via State-Aware Correction
Abstract
Long-tailed recognition fundamentally suffers from optimizer blindness: the optimization process mistakenly conflates the magnitude of gradient accumulation with the scarcity of semantic information. Existing strategies that rely on static frequency-based priors fail to correct this bias, leading to state blindness at the supervision level and micro-level blindness at the parameter-update level. To address these limitations, we propose the AES framework, which establishes a dynamic, state-aware correction system spanning the entire learning lifecycle. Specifically, we introduce an Adaptive Residual Supervision loss that acts as a real-time reality check on supervision completeness via precision shielding. We further propose Entropy-aware PCGrad, which resolves parameter-level conflicts by quantifying task specificity through gradient entropy. Finally, we devise Sample-level Conflict Arbitrated Fusion, a dynamic inference arbiter that routes predictions according to instance difficulty. Extensive experiments on CIFAR-100-LT, ImageNet-LT, and iNaturalist 2018 demonstrate that our method consistently achieves state-of-the-art performance by effectively balancing head-class stability and tail-class discrimination. Code is available in the supplementary material.