
Poster

Not all distributional shifts are equal: Fine-grained robust conformal inference

Jiahao Ai · Zhimei Ren


Abstract: We introduce a fine-grained framework for uncertainty quantification of predictive models in the presence of distributional shifts. This framework distinguishes between the shift in the covariate distribution and the shift in the conditional distribution of the outcome ($Y$) given the covariates ($X$), prescribing a corresponding treatment for each type. Since the covariate shift is often identifiable but the conditional distributional shift is not, we propose to reweight the training samples according to the covariate shift while defending against the worst-case conditional distributional shift bounded in an $f$-divergence ball. Based on ideas from conformal inference and distributionally robust learning, we present an algorithm that outputs (approximately) valid and efficient prediction sets in the presence of distributional shifts. As a special use case, we show that the framework can be applied to sensitivity analysis of individual treatment effects under hidden confounding. The proposed methods are evaluated in simulation studies and real data applications, demonstrating superior robustness and efficiency compared with existing benchmarks.
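To make the two-part treatment concrete, here is a minimal sketch of the general recipe the abstract describes: reweight calibration scores by covariate-shift likelihood ratios (as in weighted conformal prediction) and inflate the quantile level to guard against a bounded conditional shift. This is not the authors' algorithm; the function names are hypothetical, and for simplicity the $f$-divergence ball is specialized to total variation distance (an $f$-divergence with $f(t) = |t-1|/2$), where a TV radius of $\rho$ can reduce coverage by at most $\rho$.

```python
import numpy as np

def robust_weighted_quantile(scores, weights, w_test, alpha, rho):
    """Sketch of a covariate-shift-weighted conformal quantile with a
    worst-case adjustment for conditional shift (TV special case).

    scores  : nonconformity scores on the calibration set
    weights : covariate-shift likelihood ratios w(X_i) for calibration points
    w_test  : likelihood ratio w(x_test) at the test covariate
    alpha   : target miscoverage level
    rho     : TV radius bounding the conditional distributional shift
    """
    # Append the test point with score +inf, per weighted conformal prediction.
    s = np.append(np.asarray(scores, dtype=float), np.inf)
    w = np.append(np.asarray(weights, dtype=float), w_test)
    p = w / w.sum()

    # Defend against the worst case: a conditional shift of TV distance rho
    # removes at most rho coverage, so target level 1 - alpha + rho instead.
    level = min(1.0, 1.0 - alpha + rho)

    # Weighted empirical quantile of the scores at the inflated level.
    order = np.argsort(s)
    cum = np.cumsum(p[order])
    idx = np.searchsorted(cum, level)
    return s[order][min(idx, len(s) - 1)]

# Usage: the prediction set is {y : score(x_test, y) <= qhat}, e.g. for
# absolute-residual scores, the interval [mu(x_test) - qhat, mu(x_test) + qhat].
qhat = robust_weighted_quantile(
    scores=np.random.rand(500), weights=np.ones(500),
    w_test=1.0, alpha=0.1, rho=0.05,
)
```

With `rho = 0`, this reduces to standard weighted conformal prediction under pure covariate shift; other $f$-divergences (e.g. KL or $\chi^2$) would replace the simple additive inflation with the corresponding worst-case coverage function.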
