

Workshop

Workshop on Distribution-Free Uncertainty Quantification

Anastasios Angelopoulos · Stephen Bates · Yixuan Li · Aaditya Ramdas · Ryan Tibshirani

Sat 24 Jul, 7:20 a.m. PDT

Visit https://sites.google.com/berkeley.edu/dfuq21/ for details!

While improving prediction accuracy has been the focus of machine learning in recent years, accuracy alone does not suffice for reliable decision-making. Deploying learning systems in consequential settings also requires calibrating and communicating the uncertainty of predictions. A recent line of work we call distribution-free predictive inference (i.e., conformal prediction and related methods) has developed methods that give finite-sample statistical guarantees for any (possibly misspecified) predictive model and any (unknown) underlying data distribution, ensuring reliable uncertainty quantification (UQ) for many prediction tasks. This line of work represents a promising new approach to UQ for complex prediction systems but is relatively unknown in the applied machine learning community. Moreover, much remains to be done to integrate distribution-free methods with existing approaches to UQ via calibration (e.g., temperature scaling); little work has been done to bridge these two worlds.

To advance these emerging topics, the workshop has two goals: first, to bring together researchers in distribution-free methods with researchers specializing in calibration techniques and catalyze work at this interface; second, to introduce distribution-free methods to a wider ML audience. Given the recent emphasis on the reliable real-world performance of ML models, we believe a large fraction of ICML attendees will find this workshop highly relevant.
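To make the idea concrete for readers new to the area, below is a minimal sketch of split conformal prediction for regression, the simplest method in this family. It is not taken from the workshop materials: the toy data, the choice of absolute residuals as conformity scores, and the 90% coverage level are illustrative assumptions. Any point predictor can be substituted for the linear model; the guarantee P(y_new in interval) >= 1 - alpha holds for exchangeable data regardless of how well the model fits.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Toy data; the coverage guarantee holds for any (unknown) distribution.
X = rng.normal(size=(2000, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=2000)

# Split into a proper training set and a calibration set.
X_train, y_train = X[:1000], y[:1000]
X_cal, y_cal = X[1000:], y[1000:]

# Fit any (possibly misspecified) predictive model on the training split.
model = LinearRegression().fit(X_train, y_train)

# Conformity scores on the calibration set: absolute residuals.
scores = np.abs(y_cal - model.predict(X_cal))

# Score quantile with the finite-sample correction ceil((n+1)(1-alpha))/n.
alpha = 0.1
n = len(scores)
level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
q = np.quantile(scores, level, method="higher")

# Prediction interval for a new point: [f(x) - q, f(x) + q].
x_new = rng.normal(size=(1, 3))
pred = model.predict(x_new)[0]
print(f"90% prediction interval: [{pred - q:.2f}, {pred + q:.2f}]")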

Schedule