GEM-FI: Gated Evidential Mixtures with Fisher Modulation
Marco Mohammed ⋅ Fatemeh Daneshfar ⋅ Pietro Lió
Abstract
Evidential Deep Learning (EDL) enables single-pass uncertainty estimation by predicting Dirichlet evidence, but it can remain overconfident and poorly calibrated, and it often fails to represent multi-modal epistemic uncertainty. We introduce **G**ated **E**vidential **M**ixtures (**GEM**), a family of models that learns an in-model energy signal and uses it to gate evidential outputs end-to-end in a distance-aware manner. GEM-CORE learns a feature-level energy and maps it to a bounded gate that smoothly suppresses evidence when support is low. To capture epistemic multi-modality without multi-pass ensembling, GEM-MIX adds a lightweight mixture of evidential heads with learned routing weights while preserving single-pass inference. Finally, GEM-FI stabilizes mixture allocations via a Fisher-informed regularizer, reducing head collapse and producing smoother boundary uncertainty. Across image classification and OOD detection benchmarks, GEM improves calibration and ID/OOD separation with single-pass inference. On CIFAR-10, compared with DAEDL, GEM-FI improves accuracy from 91.11 to 93.75 (+2.64 pp), reduces Brier$\times$100 from 14.27 to 6.81 ($-7.46$), and improves misclassification detection (AUPR) from 99.08 to 99.94 (+0.86). For epistemic OOD detection, GEM-FI achieves AUPR/AUROC of 92.59/95.09 on CIFAR-10$\rightarrow$SVHN and 90.20/89.06 on CIFAR-10$\rightarrow$CIFAR-100 (vs. 85.54/89.30 and 88.19/86.10 for DAEDL).
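To make the gating and mixture mechanisms concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation: the module and parameter names (`GatedEvidentialMixture`, `energy`, `router`, `feat_dim`) are illustrative assumptions, and the Fisher-informed regularizer of GEM-FI is omitted since its exact form is defined in the paper body.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedEvidentialMixture(nn.Module):
    """Illustrative single-pass gated evidential mixture (a sketch, not the authors' code).

    - An energy head scores feature-space support.
    - A bounded gate g in (0, 1) scales the predicted Dirichlet evidence,
      so low-support inputs yield flatter (more uncertain) Dirichlets.
    - K lightweight evidential heads with learned routing weights capture
      multi-modal epistemic uncertainty in one forward pass.
    """

    def __init__(self, feat_dim: int, num_classes: int, num_heads: int = 3):
        super().__init__()
        self.energy = nn.Linear(feat_dim, 1)                   # feature-level energy
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, num_classes) for _ in range(num_heads)]
        )
        self.router = nn.Linear(feat_dim, num_heads)           # routing weights

    def forward(self, feats: torch.Tensor):
        # Bounded gate: high energy (low support) -> gate near 0 -> evidence suppressed.
        gate = torch.sigmoid(-self.energy(feats))              # (B, 1)
        weights = F.softmax(self.router(feats), dim=-1)        # (B, K)
        # Per-head nonnegative evidence via softplus.
        evidence = torch.stack(
            [F.softplus(h(feats)) for h in self.heads], dim=1  # (B, K, C)
        )
        # Mix heads by routing weight, then apply the gate; alpha = gated evidence + 1.
        mixed = (weights.unsqueeze(-1) * evidence).sum(dim=1)  # (B, C)
        alpha = gate * mixed + 1.0                             # Dirichlet parameters
        return alpha, weights

# Usage: class probabilities and vacuity (epistemic uncertainty) from one pass.
model = GatedEvidentialMixture(feat_dim=128, num_classes=10)
alpha, w = model(torch.randn(4, 128))
probs = alpha / alpha.sum(dim=-1, keepdim=True)
vacuity = alpha.size(-1) / alpha.sum(dim=-1)                   # C / sum(alpha)
```

In this sketch the gate multiplies the mixed evidence before the +1 offset, so a fully suppressed input degenerates to the uniform Dirichlet, which matches the abstract's description of evidence suppression when support is low.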