ICML 2021


Uncertainty and Robustness in Deep Learning

Balaji Lakshminarayanan · Dan Hendrycks · Yixuan Li · Jasper Snoek · Silvia Chiappa · Sebastian Nowozin · Thomas Dietterich

Fri 23 Jul, 6 a.m. PDT

There has been growing interest in ensuring that deep learning systems are robust and reliable. Challenges arise when models receive samples drawn from outside the training distribution. For example, a neural network tasked with classifying handwritten digits may assign high-confidence predictions to cat images. Anomalies are frequently encountered when deploying ML models in the real world. Well-calibrated predictive uncertainty estimates are indispensable for many machine learning applications, such as self-driving vehicles and medical diagnosis systems. Generalization to unseen and worst-case inputs is also essential for robustness to distributional shift. To deploy ML models safely in open environments, we must deepen our technical understanding in the following areas:

(1) Learning algorithms that can detect changes in the data distribution (e.g., out-of-distribution examples) and improve out-of-distribution generalization (e.g., under temporal, geographical, hardware, or adversarial shifts);
(2) Mechanisms to estimate and calibrate the confidence produced by neural networks, in both typical and unforeseen scenarios;
(3) Methods that guide learning towards an understanding of the underlying causal mechanisms, which can guarantee robustness under distribution shift.
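A simple illustration of direction (1) is the maximum-softmax-probability baseline for out-of-distribution detection: score each input by the highest class probability the network assigns, and flag low-scoring inputs as anomalous. The sketch below uses synthetic logits rather than a trained network; the function names and the toy data are illustrative assumptions, not code from the workshop.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    # Maximum softmax probability: higher suggests "more in-distribution".
    return softmax(logits).max(axis=-1)

# Synthetic logits (illustrative assumption, not real model outputs):
# in-distribution inputs have one dominant class; OOD inputs do not.
rng = np.random.default_rng(0)
in_logits = rng.normal(0.0, 1.0, size=(100, 10))
in_logits[np.arange(100), rng.integers(0, 10, 100)] += 6.0
ood_logits = rng.normal(0.0, 1.0, size=(100, 10))

# In-distribution inputs should score higher on average, so a threshold
# on the score separates the two groups.
assert msp_score(in_logits).mean() > msp_score(ood_logits).mean()
```

This baseline is deliberately minimal; much of the research discussed at the workshop concerns where such confidence-based scores fail and how to improve on them.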

To achieve these goals, it is critical to dedicate substantial effort to:
(4) Creating benchmark datasets and protocols for evaluating model performance under distribution shift;
(5) Studying key applications of robust and uncertainty-aware deep learning (e.g., computer vision, robotics, self-driving vehicles, medical imaging), as well as broader machine learning tasks.
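One common evaluation protocol relevant to (2) and (4) is expected calibration error (ECE): bin predictions by confidence and average the gap between each bin's accuracy and its mean confidence. The following is a minimal sketch under the usual equal-width-binning assumption; the function name and toy data are illustrative, not an official benchmark implementation.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: bin-size-weighted average |accuracy - confidence| gap."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in bin
    return ece

# Toy example: a model that says 0.8 and is right 80% of the time is
# well calibrated (ECE near 0); saying 0.9 while right 50% of the time
# is overconfident (ECE near 0.4).
conf = np.full(1000, 0.8)
corr = np.zeros(1000)
corr[:800] = 1.0
assert expected_calibration_error(conf, corr) < 1e-6
```

Benchmarks under distribution shift typically report such calibration metrics alongside accuracy, since models that are calibrated in-distribution often become overconfident on shifted inputs.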

This workshop will bring together researchers and practitioners from the machine learning communities to foster future collaborations. Our agenda will feature invited speakers, contributed talks, poster sessions across multiple time zones, and a panel discussion on fundamentally important directions for robust and reliable deep learning.
