Beyond Magnitude: Scale-Invariant Evidential Fusion for Multi-View Classification
Abstract
Evidential Deep Learning (EDL) enables trustworthy multi-view classification, yet suffers from a critical vulnerability: the Scale Mismatch Problem. We theoretically demonstrate that existing evidential fusion rules erroneously equate logit magnitude with semantic confidence, rendering them susceptible to semantic hijacking by inflated but uninformative views. To resolve this, we propose Scale-Invariant Evidential Fusion (SAEF), a framework that uses instance-wise standardization to strictly decouple confidence from scale. Instead of relying on magnitude dominance, SAEF aggregates views based on statistical consensus. Theoretically, SAEF guarantees invariance to global scaling and robustness to asymmetric dominance. Experiments on four diverse datasets confirm that SAEF outperforms state-of-the-art baselines in accuracy and robustness to semantic conflicts and noise, and remains stable under severe scale perturbations.
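The core mechanism described above, instance-wise standardization followed by consensus aggregation, can be illustrated with a minimal sketch. This is an illustrative reconstruction, not the paper's implementation: the function names are hypothetical, and mean-pooling of standardized logits is assumed here as a simple consensus proxy. The example checks the claimed invariance: inflating one view's logits by a global scale factor leaves the fused result unchanged.

```python
import numpy as np

def standardize_evidence(logits):
    """Instance-wise standardization: zero mean, unit variance across
    classes for each view, decoupling confidence from raw magnitude."""
    mu = logits.mean(axis=-1, keepdims=True)
    sigma = logits.std(axis=-1, keepdims=True) + 1e-8  # numerical safety
    return (logits - mu) / sigma

def fuse_views(view_logits):
    """Aggregate standardized views; mean-pooling stands in here
    as a simple statistical-consensus rule (an assumption)."""
    z = np.stack([standardize_evidence(v) for v in view_logits])
    return z.mean(axis=0)

# Scale invariance: multiplying one view's logits by a large constant
# (an "inflated but uninformative" view) does not change the fusion.
v1 = np.array([2.0, 1.0, 0.5])
v2 = np.array([0.3, 0.9, 0.1])
fused_a = fuse_views([v1, v2])
fused_b = fuse_views([100.0 * v1, v2])  # v1 hijacking attempt fails
assert np.allclose(fused_a, fused_b)
```

By contrast, a magnitude-based rule (e.g., summing raw logits) would let the scaled view dominate the fused prediction, which is the Scale Mismatch Problem the abstract identifies.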