Front-Loaded Robust Conformal Prediction: Heavy Calibration, Minimal Test-Time Cost
Abstract
Robust conformal prediction (RCP) addresses confidence miscalibration in machine learning models by producing prediction sets with a coverage guarantee: the set contains the true label with user-specified high probability, even under worst-case noise. Recent work achieves this robustness through randomized smoothing, which applies to black-box models and certifies larger perturbation radii. Two setups for smoothing-based RCP currently exist: one requires extensive Monte Carlo sampling at both calibration and test time but yields smaller prediction sets; the other uses only a single sample at both stages but produces larger sets. Since calibration is a one-time preprocessing step, it can absorb substantially more computational overhead than inference. Motivated by this asymmetry, we propose procedures in between: we increase the sampling rate at calibration time while using only one or a few samples at test time. Increased calibration-time sampling alone can reduce the size of the prediction sets. With a large enough test set, as is typical in production, our Front-Loaded RCPs match the computational complexity of the state of the art while producing considerably smaller sets at larger radii.
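To make the front-loading idea concrete, the sketch below shows a split conformal calibration/inference loop where heavy Monte Carlo smoothing is applied only at calibration. This is an illustrative sketch, not the paper's exact procedure: the classifier interface `model(x)` (returning softmax probabilities), all hyperparameter names, and the Gaussian-CDF score inflation (in the style of RSCP-type robust adjustments) are assumptions for the example.

```python
# Minimal sketch of front-loaded robust split conformal prediction.
# Assumptions (not from the paper): `model(x)` returns a 1-D vector of
# class probabilities; the RSCP-style CDF adjustment below is one common
# way to inflate scores for worst-case perturbations of a given radius.
import numpy as np
from scipy.stats import norm

def smoothed_probs(model, x, sigma, n_samples):
    """Monte Carlo estimate of the randomized-smoothing class probabilities at x."""
    noise = np.random.normal(0.0, sigma, size=(n_samples,) + x.shape)
    probs = np.stack([model(x + eps) for eps in noise])  # (n_samples, n_classes)
    return probs.mean(axis=0)

def calibrate(model, X_cal, y_cal, sigma, radius, alpha=0.1, n_cal_samples=10_000):
    """Front-loaded step: heavy sampling happens once here, then amortizes
    over every future test query."""
    scores = []
    for x, y in zip(X_cal, y_cal):
        p = smoothed_probs(model, x, sigma, n_cal_samples)
        s = 1.0 - p[y]  # nonconformity: low true-class probability = high score
        # Hypothetical worst-case inflation of the score under perturbations
        # of the given radius (Gaussian-CDF shift, RSCP-style).
        s = norm.cdf(norm.ppf(np.clip(s, 1e-6, 1 - 1e-6)) + 2 * radius / sigma)
        scores.append(s)
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)  # finite-sample correction
    return np.quantile(scores, level)

def predict_set(model, x, tau, sigma, n_test_samples=1):
    """Cheap test-time step: a single (or few) noisy forward passes per query."""
    p = smoothed_probs(model, x, sigma, n_test_samples)
    return np.where(1.0 - p <= tau)[0]  # all labels whose score clears the threshold
```

The design point the sketch illustrates: `n_cal_samples` can be made very large because `calibrate` runs once, while `predict_set` keeps `n_test_samples` at one or a few, so per-query cost stays near that of a single forward pass.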