While improving prediction accuracy has been the focus of machine learning in recent years, this alone does not suffice for reliable decision-making. Deploying learning systems in consequential settings also requires calibrating and communicating the uncertainty of predictions. A recent line of work we call distribution-free predictive inference (i.e., conformal prediction and related methods) has developed a set of methods that give finite-sample statistical guarantees for any (possibly incorrectly specified) predictive model and any (unknown) underlying distribution of the data, ensuring reliable uncertainty quantification (UQ) for many prediction tasks. This line of work represents a promising new approach to UQ with complex prediction systems but is relatively unknown in the applied machine learning community. Moreover, much remains to be done to integrate distribution-free methods with existing approaches to modern machine learning in computer vision, natural language, reinforcement learning, and so on; little work has been done to bridge these two worlds. To support this emerging topic, the proposed workshop has two goals. First, to bring together researchers in distribution-free methods with researchers specializing in applications of machine learning, to catalyze work at this interface. Second, to bring together the existing community of distribution-free uncertainty quantification research, as no other workshop like this exists at a major conference. Given the important recent emphasis on the reliable real-world performance of ML models, we believe a large fraction of ICML attendees will find this workshop highly relevant.
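The finite-sample guarantee mentioned above is easiest to see in split conformal prediction. The following is a minimal illustrative sketch (not taken from the workshop materials), assuming a held-out calibration set and absolute-residual conformity scores:

```python
import numpy as np

def split_conformal_radius(residuals, alpha=0.1):
    """Interval half-width from held-out absolute residuals |y - f(x)|.

    Under exchangeability, intervals f(x_new) +/- q cover a new label
    with probability >= 1 - alpha, for any fitted model f.
    """
    n = len(residuals)
    # Finite-sample correction: take the ceil((n + 1)(1 - alpha))-th
    # smallest residual rather than the plain (1 - alpha) quantile.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return float(np.quantile(residuals, level, method="higher"))

rng = np.random.default_rng(0)
calib = np.abs(rng.normal(size=1000))  # stand-in calibration residuals
q = split_conformal_radius(calib, alpha=0.1)
```

The guarantee is marginal over the calibration draw and the test point; no assumption is made about the model that produced the residuals.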
Sat 6:20 a.m. – 6:30 a.m.

Opening Remarks
(Live Talk)
Sat 6:30 a.m. – 7:15 a.m.

Michael I. Jordan: Learn then Test: Calibrating Predictive Algorithms to Achieve Risk Control
(Live Talk)
We introduce a framework for calibrating machine learning models so that their predictions satisfy explicit, finite-sample statistical guarantees. Our calibration algorithm works with any underlying model and (unknown) data-generating distribution and does not require model refitting. The framework addresses, among other examples, false discovery rate control in multi-label classification, intersection-over-union control in instance segmentation, and the simultaneous control of the type-1 error of outlier detection and confidence set coverage in classification or regression. Our main insight is to reframe the risk-control problem as multiple hypothesis testing, enabling techniques and mathematical arguments different from those in the previous literature. We use our framework to provide new calibration methods for several core machine learning tasks, with detailed worked examples in computer vision and tabular medical data.
Michael Jordan
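As a rough illustration of the reframing of risk control as multiple hypothesis testing, the hypothetical sketch below assigns each candidate threshold a Hoeffding p-value and accepts thresholds via fixed-sequence testing. The function name, the loss-matrix layout, and the choice of Hoeffding's bound are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def learn_then_test(lambdas, loss_matrix, alpha=0.1, delta=0.1):
    """Return the candidate thresholds certified to control risk.

    loss_matrix[i, j] is a bounded loss in [0, 1] of calibration point i
    under threshold lambdas[j]; lambdas are ordered so that risk grows
    with j. Each lambda is tested against H0: risk(lambda) > alpha with
    a Hoeffding p-value; fixed-sequence testing accepts lambdas until
    the first test fails, controlling family-wise error at level delta.
    """
    n = loss_matrix.shape[0]
    risks = loss_matrix.mean(axis=0)  # empirical risk per threshold
    valid = []
    for lam, rhat in zip(lambdas, risks):
        # Hoeffding p-value for the null that the true risk exceeds alpha.
        p = np.exp(-2 * n * (alpha - rhat) ** 2) if rhat < alpha else 1.0
        if p > delta:  # first failed test stops the sequence
            break
        valid.append(lam)
    return valid
```

Any concentration inequality yielding valid p-values could replace Hoeffding's bound here; that modularity is the point of the multiple-testing view.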
Sat 7:15 a.m. – 8:00 a.m.

Poster Session #1
(Poster Session)
Poster session #1. 
Sat 8:00 a.m. – 8:20 a.m.

Coffee Break

Sat 8:20 a.m. – 8:40 a.m.

Zhimei Ren: Sensitivity Analysis of Individual Treatment Effects: A Robust Conformal Inference Approach
(Live Talk)
We propose a model-free framework for sensitivity analysis of individual treatment effects (ITEs), building upon ideas from conformal inference. For any unit, our procedure reports the Γ-value, a number which quantifies the minimum strength of confounding needed to explain away the evidence for the ITE. Our approach rests on the reliable predictive inference of counterfactuals and ITEs in situations where the training data is confounded. Under the marginal sensitivity model of Tan (2006), we characterize the shift between the distribution of the observations and that of the counterfactuals. We first develop a general method for predictive inference of test samples from a shifted distribution; we then leverage this to construct covariate-dependent prediction sets for counterfactuals. No matter the value of the shift, these prediction sets achieve marginal coverage exactly when the propensity score is known, and approximately when it is estimated. We describe a distinct procedure that also attains coverage, but conditional on the training data. In the latter case, we prove a sharpness result showing that for certain classes of prediction problems, the prediction intervals cannot possibly be tightened. We verify the validity and performance of the new methods via simulation studies and apply them to analyze real datasets. This is joint work with Ying Jin and Emmanuel Candès.
Sat 8:40 a.m. – 9:00 a.m.

Yao Xie: Conformal prediction intervals and sets for time series
(Live Talk)
We develop a general distribution-free framework based on conformal prediction for time series, including prediction intervals for real-valued responses and prediction sets for categorical responses. We show that our intervals and sets asymptotically attain valid conditional and marginal coverage for a broad class of prediction functions and time series. We also show that our interval width or set size converges asymptotically to that of the oracle prediction interval or set. Moreover, we introduce computationally efficient algorithms called EnbPI for prediction intervals and ERAPS for prediction sets, which wrap around ensemble predictors. Our framework is closely related to conformal prediction (CP) but does not require data exchangeability. Both algorithms avoid data splitting and are computationally efficient because they avoid retraining, making them scalable for sequentially producing prediction intervals or sets. We perform extensive simulation and real-data analyses to demonstrate their effectiveness compared with existing methods. This is joint work with Chen Xu at Georgia Tech.
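To convey the flavor of sequential intervals that avoid data splitting and retraining, here is a hypothetical simplification that forms intervals from a sliding window of past absolute residuals; the actual EnbPI algorithm additionally uses a bootstrap ensemble of predictors:

```python
import numpy as np

def rolling_intervals(y, preds, window=200, alpha=0.1):
    """Sequential intervals preds[t] +/- q_t, where q_t is the
    (1 - alpha) quantile of the last `window` absolute residuals.

    No retraining and no data splitting; older residuals age out of
    the window, giving some robustness to slow distribution drift.
    """
    lower, upper, residuals = [], [], []
    for t in range(len(y)):
        q = np.quantile(residuals[-window:], 1 - alpha) if residuals else 0.0
        lower.append(preds[t] - q)
        upper.append(preds[t] + q)
        residuals.append(abs(y[t] - preds[t]))  # y[t] observed after predicting
    return np.array(lower), np.array(upper)
```

Because residuals enter the window only after the corresponding prediction is issued, the procedure never peeks at the label it is covering.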
Sat 9:00 a.m. – 10:30 a.m.

Lunch

Sat 10:30 a.m. – 11:15 a.m.

Panel Discussion
Panelists: Emmanuel Candès, Victor Chernozhukov, Pietro Perona. Moderator: Stephen Bates
Sat 11:15 a.m. – 12:15 p.m.

Spotlight Presentations
(Recorded Spotlight Talks)
Probabilistic Conformal Prediction Using Conditional Random Samples, by Zhendong Wang, Ruijiang Gao, Mingzhang Yin, Mingyuan Zhou and David Blei
On the Utility of Prediction Sets in Human-AI Teams, by Varun Babbar, Umang Bhatt and Adrian Weller
Adaptive Conformal Predictions for Time Series, by Margaux Zaffran, Olivier Féron, Yannig Goude, Julie Josse and Aymeric Dieuleveut
Approximate Conditional Coverage via Neural Model Approximations, by Allen Schmaltz and Danielle Rasooly
Practical Adversarial Multivalid Conformal Prediction, by Osbert Bastani, Varun Gupta, Christopher Jung, Georgy Noarov, Ramya Ramalingam and Aaron Roth
VaR Control: Bounding the Probability of High-Loss Predictions, by Jake Snell, Thomas Zollo and Richard Zemel
Confident Adaptive Language Modeling, by Tal Schuster and Adam Fisch
Adrian Weller · Osbert Bastani · Jake Snell · Tal Schuster · Stephen Bates · Zhendong Wang · Margaux Zaffran · Danielle Rasooly · Varun Babbar
Sat 12:15 p.m. – 1:00 p.m.

Insup Lee: PAC Prediction Sets: Theory and Applications
(Live Talk)
Reliable uncertainty quantification is crucial for applying machine learning in safety-critical systems such as healthcare and autonomous vehicles, since it enables decision-making to account for risk. One effective strategy is to construct prediction sets, which modify a model to output sets of labels rather than individual labels. In this talk, we describe our work on prediction sets that come with probably approximately correct (PAC) guarantees. First, we propose an algorithm for constructing prediction sets with PAC guarantees in the i.i.d. setting. Then, we show how our algorithm and its guarantees can be adapted to the covariate shift setting (which is precisely when reliable uncertainty quantification can be most critical). Furthermore, we describe how to adapt our algorithm to the meta-learning setting, where a model is adapted to novel tasks with just a handful of examples. Finally, we demonstrate the practical value of PAC prediction sets in a variety of applications, including object classification, detection, and tracking; anomaly detection; natural language entity prediction; detecting oxygen saturation false alarms in pediatric intensive care units; and heart attack prediction.
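A PAC-style guarantee differs from marginal conformal coverage: it holds with probability at least 1 − δ over the draw of the calibration data. A crude sketch using Hoeffding's inequality follows (the talk's algorithm uses sharper tail bounds; the function name and interface are illustrative assumptions):

```python
import numpy as np

def pac_threshold(scores, epsilon=0.1, delta=0.05):
    """Score threshold tau so that sets {y : score(x, y) <= tau}
    miscover at rate <= epsilon, with probability >= 1 - delta over
    the draw of the calibration scores (Hoeffding-based sketch).
    """
    n = len(scores)
    slack = np.sqrt(np.log(1.0 / delta) / (2 * n))  # Hoeffding deviation
    target = epsilon - slack  # empirical miscoverage we must achieve
    if target <= 0:
        return float(np.max(scores))  # too little data: cover all seen scores
    k = int(np.ceil(n * (1 - target)))  # keep the k smallest scores
    return float(np.sort(scores)[min(k, n) - 1])
```

The slack term is what buys the high-probability guarantee: the threshold is chosen conservatively enough that the empirical miscoverage could not have been a lucky draw.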
Sat 1:00 p.m. – 1:45 p.m.

Poster Session #2
(Poster Session)

Sat 1:45 p.m. – 1:55 p.m.

Coffee Break

Sat 1:55 p.m. – 2:40 p.m.

Rina Barber: Conformal prediction beyond exchangeability
(Live Talk)
Conformal prediction is a popular, modern technique for providing valid predictive inference for arbitrary machine learning models. Its validity relies on the assumptions of exchangeability of the data and symmetry of the given model-fitting algorithm as a function of the data. However, exchangeability is often violated when predictive models are deployed in practice. For example, if the data distribution drifts over time, then the data points are no longer exchangeable; moreover, in such settings, we might want to use an algorithm that treats recent observations as more relevant, which would violate the assumption that data points are treated symmetrically. This paper proposes new methodology to deal with both aspects: we use weighted quantiles to introduce robustness against distribution drift, and design a new technique to allow for algorithms that do not treat data points symmetrically, with theoretical results verifying coverage guarantees that are robust to violations of exchangeability. This work is joint with Emmanuel Candès, Aaditya Ramdas, and Ryan Tibshirani.
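The weighted-quantile device mentioned above can be sketched as follows; this minimal version down-weights older residuals and omits details of the paper's construction (for instance, the weight placed on the test point itself):

```python
import numpy as np

def weighted_conformal_radius(residuals, weights, alpha=0.1):
    """Weighted (1 - alpha) quantile of past absolute residuals.

    With weights decaying in age, stale residuals count for less,
    trading some efficiency for robustness to distribution drift.
    """
    order = np.argsort(residuals)
    rs = np.asarray(residuals, dtype=float)[order]
    ws = np.asarray(weights, dtype=float)[order]
    ws = ws / ws.sum()
    cum = np.cumsum(ws)
    idx = np.searchsorted(cum, 1 - alpha)  # first index with mass >= 1 - alpha
    return rs[min(idx, len(rs) - 1)]

# Equal weights recover the ordinary empirical quantile; geometric
# decay (newest observation last) emphasizes recent residuals.
res = np.arange(100, dtype=float)
q_flat = weighted_conformal_radius(res, np.ones(100))
q_decay = weighted_conformal_radius(res, 0.99 ** np.arange(100)[::-1])
```

When the weights are all equal, the method reduces to standard conformal prediction, so nothing is lost if the data happen to be exchangeable after all.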
Sat 2:40 p.m. – 2:45 p.m.

Closing Remarks
(Live Talk)
Author Information
Anastasios Angelopoulos (UC Berkeley)
Stephen Bates (UC Berkeley)
Yixuan Li (University of Wisconsin-Madison)
Ryan Tibshirani (Carnegie Mellon University)
Aaditya Ramdas (Carnegie Mellon University)