Workshop
Fri Jul 17 07:30 AM -- 04:00 PM (PDT)
Uncertainty and Robustness in Deep Learning Workshop (UDL)
Sharon Yixuan Li · Balaji Lakshminarayanan · Dan Hendrycks · Thomas Dietterich · Jasper Snoek

There has been growing interest in rectifying deep neural network instabilities. Challenges arise when models receive samples drawn from outside the training distribution: for example, a neural network trained to classify handwritten digits may assign high-confidence predictions to cat images. Such anomalies are frequently encountered when deploying ML models in the real world. Well-calibrated predictive uncertainty estimates are indispensable for many machine learning applications, such as self-driving vehicles and medical diagnosis systems. Generalization to unseen and worst-case inputs is likewise essential for robustness to distributional shift. For ML models to predict reliably in open environments, we must deepen our technical understanding in the following emerging areas:

(1) learning algorithms that can detect changes in data distribution (e.g., out-of-distribution examples);
(2) mechanisms to estimate and calibrate the confidence produced by neural networks in typical and unforeseen scenarios;
(3) methods to improve out-of-distribution generalization, including generalization to temporal, geographical, hardware, adversarial, and image-quality changes;
(4) benchmark datasets and protocols for evaluating model performance under distribution shift; and
(5) key applications of robust and uncertainty-aware deep learning (e.g., computer vision, robotics, self-driving vehicles, medical imaging) as well as broader machine learning tasks.
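To make themes (1) and (2) concrete, here is a minimal, illustrative sketch (not drawn from any workshop paper) of two standard baselines: scoring inputs by maximum softmax probability for out-of-distribution detection, and measuring calibration with expected calibration error (ECE). All function names and the toy data are hypothetical.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def msp_score(logits):
    """Maximum softmax probability: lower values suggest the input may be out-of-distribution."""
    return softmax(logits).max(axis=1)

def expected_calibration_error(probs, labels, n_bins=10):
    """Average gap between confidence and accuracy, weighted by confidence-bin occupancy."""
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    correct = (pred == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece

# Toy usage: random logits stand in for a trained classifier's outputs.
rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 10))
labels = rng.integers(0, 10, size=1000)
probs = softmax(logits)
print("mean MSP:", msp_score(logits).mean())
print("ECE:", expected_calibration_error(probs, labels))
```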

This workshop will bring together researchers and practitioners from the machine learning community and highlight recent work that addresses these challenges. Our agenda features contributed talks alongside invited speakers. Through the workshop we hope to identify fundamentally important directions for robust and reliable deep learning, and to foster future collaborations.

Opening Remarks (Presentation)
Keynote #1 Matthias Hein (Keynote)
Spotlight Talk 1: Likelihood Regret: An Out-of-Distribution Detection Score For Variational Auto-encoder (Spotlight)
Spotlight Talk 2: A Closer Look at Accuracy vs. Robustness (Spotlight)
Spotlight Talk 3: Depth Uncertainty in Neural Networks (Spotlight)
Spotlight Talk 4: Few-shot Out-of-Distribution Detection (Spotlight)
Spotlight Talk 5: Detecting Failure Modes in Image Reconstructions with Interval Neural Network Uncertainty (Spotlight)
Spotlight Talk 6: On using Focal Loss for Neural Network Calibration (Spotlight)
Spotlight Talk 7: AutoAttack: reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks (Spotlight)
Spotlight Talk 8: Calibrated Top-1 Uncertainty estimates for classification by score based models (Spotlight)
Poster Session (Poster)
Coffee Break (Break)
Keynote #2 Finale Doshi-Velez (Keynote)
Keynote #3 Percy Liang (Keynote)
Panel Discussion (Panel)
Lunch Break (Break)
Keynote #4 Raquel Urtasun (Keynote)
Contributed Talk 1: Confidence-Calibrated Adversarial Training: Generalizing to Unseen Attacks (Presentation)
Contributed Talk 2: Improving robustness against common corruptions by covariate shift adaptation (Presentation)
Contributed Talk 3: A Unified View of Label Shift Estimation (Presentation)
Keynote #5 Justin Gilmer (Keynote)
Coffee Break (Break)
Contributed Talk 4: A Benchmark of Medical Out of Distribution Detection (Presentation)
Contributed Talk 5: Neural Ensemble Search for Performant and Calibrated Predictions (Presentation)
Contributed Talk 6: Bayesian model averaging is suboptimal for generalization under model misspecification (Presentation)