

Workshop

Workshop on Distribution-Free Uncertainty Quantification

Anastasios Angelopoulos · Stephen Bates · Yixuan Li · Ryan Tibshirani · Aaditya Ramdas

Room 308

While improving prediction accuracy has been the focus of machine learning in recent years, accuracy alone does not suffice for reliable decision-making. Deploying learning systems in consequential settings also requires calibrating and communicating the uncertainty of predictions. A recent line of work we call distribution-free predictive inference (i.e., conformal prediction and related methods) gives finite-sample statistical guarantees for any (possibly misspecified) predictive model and any (unknown) underlying data distribution, ensuring reliable uncertainty quantification (UQ) for many prediction tasks. This line of work represents a promising new approach to UQ with complex prediction systems but remains relatively unknown in the applied machine learning community. Moreover, much remains to be done to integrate distribution-free methods with modern machine learning in computer vision, natural language, reinforcement learning, and so on; little work has yet bridged these two worlds.

To advance these emerging topics, the workshop has two goals. First, to bring together researchers in distribution-free methods with researchers specializing in applications of machine learning, catalyzing work at this interface. Second, to convene the existing community of distribution-free uncertainty quantification research, as no comparable workshop exists at a major conference. Given the recent emphasis on the reliable real-world performance of ML models, we believe a large fraction of ICML attendees will find this workshop highly relevant.
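To make the finite-sample guarantee concrete, here is a minimal sketch of split conformal prediction for regression, one representative distribution-free method; it is an illustration, not part of the workshop materials. It assumes a scikit-learn-style fitted model exposing `.predict`, NumPy ≥ 1.22 (for the `method=` argument of `np.quantile`), and an illustrative miscoverage level `alpha`.

```python
import numpy as np

def split_conformal_interval(model, X_cal, y_cal, X_test, alpha=0.1):
    """Split conformal prediction intervals for regression.

    Given any fitted point predictor `model`, a held-out calibration
    set (X_cal, y_cal), and test inputs X_test, returns intervals that
    cover the true test labels with probability >= 1 - alpha, assuming
    only exchangeability of calibration and test points. No assumption
    is made about the model or the data distribution.
    """
    n = len(y_cal)
    # Nonconformity scores: absolute residuals on the calibration set.
    scores = np.abs(y_cal - model.predict(X_cal))
    # Conformal quantile level with the finite-sample correction
    # ceil((n + 1) * (1 - alpha)) / n, clipped to 1 for small n.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(scores, q_level, method="higher")
    # Symmetric interval around the point prediction.
    preds = model.predict(X_test)
    return preds - q_hat, preds + q_hat
```

Whatever the quality of the underlying model, the returned intervals attain at least 1 - alpha coverage under exchangeability; a poor model simply yields wider intervals rather than invalid ones.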

Timezone: America/Los_Angeles

Schedule