From Individual Calibration to Reliable Classifiers: ALD Parameterization with mPAIC Guarantees
Abstract
Modern neural classifiers achieve remarkable predictive performance yet often suffer from miscalibration. In this paper, we introduce a unified calibration framework applicable to arbitrary distribution-based classifiers. The proposed calibration objective guarantees a monotone Probably Approximately Individually Calibrated (mPAIC) predictor, which in turn provably yields the guarantees of a Probably Approximately Calibrated Classifier (PACC) with explicit error bounds. To enable stable and effective optimization, we further devise a Decoupled Dual-Stream Optimization (DDSO) strategy that uses gradient detachment to reconcile discriminative representation learning with continuous calibration. Notably, our framework bridges calibration paradigms, supporting flexible deployment either as an end-to-end pre-calibration objective or as a lightweight post-calibration adapter. Extensive experiments on nine real-world datasets demonstrate that our approach consistently outperforms strong baselines in terms of both accuracy and multi-level calibration.
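The gradient-detachment idea behind DDSO can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the paper's actual objective: the module names (`backbone`, `cls_head`, `calib_head`) are hypothetical, and a label-smoothed cross-entropy stands in for the paper's calibration loss. The key mechanism shown is that the calibration stream sees detached features, so its gradients never perturb representation learning.

```python
import torch
import torch.nn.functional as F

def ddso_step(backbone, cls_head, calib_head, x, y, optimizer):
    """One hypothetical decoupled dual-stream update.

    The discriminative stream trains the backbone and classifier head;
    the calibration stream operates on detached features, so only
    calib_head receives calibration gradients.
    """
    feats = backbone(x)                       # shared representation
    logits = cls_head(feats)                  # discriminative stream
    ce_loss = F.cross_entropy(logits, y)

    # Gradient detachment: the calibration stream cannot backpropagate
    # into the backbone, decoupling the two optimization objectives.
    calib_logits = calib_head(feats.detach())
    # Placeholder calibration loss (assumption; the paper's continuous
    # calibration objective would go here).
    calib_loss = F.cross_entropy(calib_logits, y, label_smoothing=0.1)

    loss = ce_loss + calib_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under this design, `cls_head` and `backbone` are driven only by the discriminative loss, while `calib_head` adapts to the per-step features, mirroring the decoupling the abstract describes; the same head could also be trained alone on a frozen backbone, which is how the post-calibration adapter deployment would look.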