Label-Guided Representation Learning for Incomplete Multi-View Multi-Label Classification
Abstract
Incomplete multi-view multi-label classification addresses scenarios where both views and labels are partially missing. Existing methods treat labels solely as supervision signals, overlooking the semantic structure inherent in partial annotations. We propose Label-Guided Representation Learning (LGRL), which systematically exploits label semantics as structural priors throughout learning. Our framework constructs a semantic-informed mixture prior via learnable category prototypes to guide representation extraction, and introduces category-specific conditional posteriors in which prototypes act as Bayesian experts for multi-view fusion. We further derive a principled label-driven information bottleneck objective that balances reconstruction sufficiency with cross-view consistency, enabling category-conditional reasoning. Extensive experiments on benchmark datasets and on real-world applications in sports analytics and medical imaging demonstrate the effectiveness of LGRL.
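The abstract describes category prototypes acting as "Bayesian experts" for multi-view fusion. One common instantiation of this idea (a sketch under assumptions, not the paper's actual implementation) is a Gaussian product of experts, where each available view contributes a Gaussian posterior over the shared representation and a category prototype contributes an additional expert; missing views simply drop out of the product. The function and variable names below are illustrative, not taken from LGRL:

```python
import numpy as np

def product_of_gaussians(mus, vars_):
    """Fuse several diagonal-Gaussian 'experts' (e.g. per-view posteriors
    plus a category-prototype prior) via a precision-weighted product.

    Each expert i contributes precision 1/var_i; the fused distribution
    has precision = sum of precisions and a precision-weighted mean.
    """
    precisions = 1.0 / np.asarray(vars_)            # (n_experts, dim)
    fused_var = 1.0 / precisions.sum(axis=0)        # combined precision -> variance
    fused_mu = fused_var * (np.asarray(mus) * precisions).sum(axis=0)
    return fused_mu, fused_var

# Two observed views plus one category prototype acting as an extra expert.
# A missing view would simply be omitted from the lists.
view_mus = [np.array([0.0, 2.0]), np.array([1.0, 1.0])]
view_vars = [np.array([1.0, 1.0]), np.array([1.0, 1.0])]
proto_mu, proto_var = np.array([2.0, 0.0]), np.array([2.0, 2.0])

mu, var = product_of_gaussians(view_mus + [proto_mu], view_vars + [proto_var])
# The broad prototype prior (variance 2.0) pulls the fused mean only weakly.
```

This precision weighting means confident (low-variance) experts dominate the fused posterior, which is why a prototype with a broad variance acts as a soft semantic prior rather than overriding the observed views.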