Workshop
Uncertainty and Robustness in Deep Learning
Balaji Lakshminarayanan · Dan Hendrycks · Yixuan Li · Jasper Snoek · Silvia Chiappa · Sebastian Nowozin · Thomas Dietterich
Fri 23 Jul, 6 a.m. PDT
There has been growing interest in ensuring that deep learning systems are robust and reliable. Challenges arise when models receive samples drawn from outside the training distribution. For example, a neural network tasked with classifying handwritten digits may assign high-confidence predictions to cat images. Anomalies are frequently encountered when deploying ML models in the real world. Well-calibrated predictive uncertainty estimates are indispensable for many machine learning applications, such as self-driving vehicles and medical diagnosis systems. Generalization to unseen and worst-case inputs is also essential for robustness to distributional shift. To deploy ML models safely in open environments, we must deepen technical understanding in the following areas:
(1) Learning algorithms that can detect changes in data distribution (e.g. out-of-distribution examples) and improve out-of-distribution generalization (e.g. temporal, geographical, hardware, adversarial shifts);
(2) Mechanisms to estimate and calibrate the confidence produced by neural networks in typical and unforeseen scenarios (a brief illustrative sketch follows this list);
(3) Methods that guide learning towards the underlying causal mechanisms, which can guarantee robustness with respect to distribution shift.
To achieve these goals, it is critical to dedicate substantial effort to:
(4) Creating benchmark datasets and protocols for evaluating model performance under distribution shift; and
(5) Studying key applications of robust and uncertainty-aware deep learning (e.g., computer vision, robotics, self-driving vehicles, medical imaging), as well as broader machine learning tasks.
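As a concrete, if simplified, illustration of goals (1) and (2) above, the sketch below computes two quantities that recur throughout the program: a maximum-softmax-probability confidence score, a common baseline signal for out-of-distribution detection, and expected calibration error (ECE), a standard measure of how well confidence matches accuracy. This is a minimal NumPy sketch written for this overview, not code from any of the listed papers; the function names and the 15-bin choice are illustrative assumptions.

```python
import numpy as np

def max_softmax_confidence(logits):
    """Max softmax probability per example; low values often flag out-of-distribution inputs."""
    z = logits - logits.max(axis=1, keepdims=True)          # stabilise the exponentials
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1)

def expected_calibration_error(confidences, predictions, labels, n_bins=15):
    """Average |accuracy - confidence| over equal-width confidence bins, weighted by bin size."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = (predictions[in_bin] == labels[in_bin]).mean()
            conf = confidences[in_bin].mean()
            ece += in_bin.mean() * abs(acc - conf)
    return ece
```

In practice, one would threshold the confidence score on held-out in-distribution data to flag likely OOD inputs, and report ECE alongside accuracy when comparing calibration methods; many of the talks and posters below study where such simple baselines break down and how to improve on them.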
This workshop will bring together researchers and practitioners from the machine learning communities to foster future collaborations. Our agenda will feature invited speakers, contributed talks, poster sessions in multiple time zones, and a panel discussion on fundamentally important directions for robust and reliable deep learning.
Schedule
Fri 6:00 a.m. - 6:15 a.m. | Welcome (Opening Remarks) | Balaji Lakshminarayanan
Fri 6:15 a.m. - 6:45 a.m. | Uncertainty Modeling from 50M to 1B (Invited Talk) | Dustin Tran
Fri 6:45 a.m. - 8:00 a.m. | Live Poster session #1, Europe/Asia friendly (Poster session)
Fri 8:00 a.m. - 8:15 a.m. | Coffee Break 1
Fri 8:15 a.m. - 8:45 a.m. | Some Thoughts on Generalization, Robustness, and their application with CLIP (Invited Talk) | Alec Radford
Fri 8:45 a.m. - 10:00 a.m. | Live Poster session #2, America friendly (Poster session)
Fri 10:00 a.m. - 10:45 a.m. | Live Panel Discussion (Panel Discussion) | Thomas Dietterich · Chelsea Finn · Kamalika Chaudhuri · Yarin Gal · Uri Shalit
Fri 10:45 a.m. - 11:15 a.m. | Lunch Break
Fri 11:15 a.m. - 11:25 a.m. | Repulsive Deep Ensembles are Bayesian (Contributed Talk) | Francesco D'Angelo
Fri 11:25 a.m. - 11:35 a.m. | Wide Mean-Field Variational Bayesian Neural Networks Ignore the Data (Contributed Talk) | Beau Coker
Fri 11:35 a.m. - 11:45 a.m. | Are Bayesian neural networks intrinsically good at out-of-distribution detection? (Contributed Talk) | Christian Henning
Fri 11:45 a.m. - 12:15 p.m. | Improving Robustness to Distribution Shifts: Methods and Benchmarks (Invited Talk) | Shiori Sagawa
Fri 12:15 p.m. - 12:30 p.m. | Coffee Break 2
Fri 12:30 p.m. - 1:00 p.m. | Evaluating deep learning models with applications to NLP (Invited Talk) | Nazneen Rajani
Fri 1:00 p.m. - 1:10 p.m. | Calibrated Out-of-Distribution Detection with Conformal P-values (Contributed Talk) | Lihua Lei
Fri 1:10 p.m. - 1:20 p.m. | Provably Robust Detection of Out-of-distribution Data (almost) for free (Contributed Talk) | Alexander Meinke
Fri 1:20 p.m. - 1:30 p.m. | Out-of-Distribution Dynamics Detection: RL-Relevant Benchmarks and Results (Contributed Talk) | Mohamad H Danesh
Fri 1:30 p.m. - 2:00 p.m. | Contrastive Learning for Novelty Detection (Invited Talk) | Jinwoo Shin
Workshop Posters
A simple fix to Mahalanobis distance for improving near-OOD detection
Wide Mean-Field Variational Bayesian Neural Networks Ignore the Data
Precise characterization of the prior predictive distribution of deep ReLU networks
Disentangling the Roles of Curation, Data-Augmentation and the Prior in the Cold Posterior Effect
Exploring the Limits of Out-of-Distribution Detection
Repulsive Deep Ensembles are Bayesian
Calibrated Out-of-Distribution Detection with Conformal P-values
Are Bayesian neural networks intrinsically good at out-of-distribution detection?
Provably Robust Detection of Out-of-distribution Data (almost) for free
Out-of-Distribution Dynamics Detection: RL-Relevant Benchmarks and Results
Rethinking Assumptions in Deep Anomaly Detection
Multiple Moment Matching Inference: A Flexible Approximate Inference Algorithm
PAC Prediction Sets Under Covariate Shift
Correct-N-Contrast: a Contrastive Approach for Improving Robustness to Spurious Correlations
Do We Really Need to Learn Representations from In-domain Data for Outlier Detection?
DATE: Detecting Anomalies in Text via Self-Supervision of Transformers
Beyond Pinball Loss: Quantile Methods for Calibrated Uncertainty Quantification
Uncertainty Toolbox: an Open-Source Library for Assessing, Visualizing, and Improving Uncertainty Quantification
Implicit Ensemble Training for Efficient and Robust Multiagent Reinforcement Learning
Failures of Uncertainty Estimation on Out-Of-Distribution Samples: Experimental Results from Medical Applications Lead to Theoretical Insights
On Out-of-distribution Detection with Energy-Based Models
Deterministic Neural Networks with Inductive Biases Capture Epistemic and Aleatoric Uncertainty
Transfer and Marginalize: Explaining Away Label Noise with Privileged Information
Meta-Calibration: Meta-Learning of Model Calibration Using Differentiable Expected Calibration Error
Inferring Black Hole Properties from Astronomical Multivariate Time Series with Bayesian Attentive Neural Processes
Towards improving robustness of compressed CNNs
SAND-mask: An Enhanced Gradient Masking Strategy for Invariant Prediction in Domain Generalization
Efficient Gaussian Neural Processes for Regression
Simple, Attack-Agnostic Defense Against Targeted Training Set Attacks Using Cosine Similarity
Safety & Exploration: A Comparative Study of Uses of Uncertainty in Reinforcement Learning
Rethinking Function-Space Variational Inference in Bayesian Neural Networks
Understanding the Under-Coverage Bias in Uncertainty Estimation
BETH Dataset: Real Cybersecurity Data for Anomaly Detection Research
Mean Embeddings with Test-Time Data Augmentation for Ensembling of Representations
Deep Ensemble Uncertainty Fails as Network Width Increases: Why, and How to Fix It
Exact and Efficient Adversarial Robustness with Decomposable Neural Networks
Consistency Regularization for Training Confidence-Calibrated Classifiers
Diverse and Amortised Counterfactual Explanations for Uncertainty Estimates
Quantization of Bayesian neural networks and its effect on quality of uncertainty
Class-Distribution-Aware Calibration for Long-Tailed Visual Recognition
Bayesian Neural Networks with Soft Evidence
Anomaly Detection for Event Data with Temporal Point Processes
Batch Inverse-Variance Weighting: Deep Heteroscedastic Regression
An Empirical Study of Invariant Risk Minimization on Deep Models
A Bayesian Approach to Invariant Deep Neural Networks
Practical posterior Laplace approximation with optimization-driven second moment estimation
Variational Generative Flows for Reconstruction Uncertainty Estimation
Improving the Accuracy-Robustness Trade-Off for Dual-Domain Adversarial Training
Consistency Regularization Can Improve Robustness to Label Noise
Neural Variational Gradient Descent
Evaluating the Use of Reconstruction Error for Novelty Localization
Accuracy on the Line: on the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization
The Hidden Uncertainty in a Neural Network's Activations
On the Calibration of Deterministic Epistemic Uncertainty
Objective Robustness in Deep Reinforcement Learning
Epistemic Uncertainty in Learning Chaotic Dynamical Systems
Towards Stochastic Neural Networks via Inductive Wasserstein Embeddings
Distribution-free uncertainty quantification for classification under label shift
How does a Neural Network's Architecture Impact its Robustness to Noisy Labels?
Top-label calibration
Learning to Align the Support of Distributions
Beyond First-Order Uncertainty Estimation with Evidential Models for Open-World Recognition
Revisiting Out-of-Distribution Detection: A Simple Baseline is Surprisingly Effective
Contrastive Predictive Coding for Anomaly Detection and Segmentation
Multi-headed Neural Ensemble Search
A variational approximate posterior for the deep Wishart process
What Are Effective Labels for Augmented Data? Improving Calibration and Robustness with AutoLabel
On Stein Variational Neural Network Ensembles
Uncertainty-Aware Boosted Ensembling in Multi-Modal Settings
RouBL: A computationally cheap way to go beyond mean-field variational inference
No True State-of-the-Art? OOD Detection Methods are Inconsistent across Datasets
Out-of-Distribution Generalization with Deep Equilibrium Models
Mixture Proportion Estimation and PU Learning: A Modern Approach
On The Dark Side Of Calibration For Modern Neural Networks
Domain Adaptation with Factorizable Joint Shift
Scaling Laws for the Out-of-Distribution Generalization of Image Classifiers
Learning Invariant Weights in Neural Networks
Relational Deep Reinforcement Learning and Latent Goals for Following Instructions in Temporal Logic
On the Effectiveness of Mode Exploration in Bayesian Model Averaging for Neural Networks
Training-Free Uncertainty Estimation for Dense Regression: Sensitivity as a Surrogate
Detecting OODs as datapoints with High Uncertainty
Multi-task Transformation Learning for Robust Out-of-Distribution Detection
Directly Training Joint Energy-Based Models for Conditional Synthesis and Calibrated Prediction of Multi-Attribute Data
Deep Learning with Quantified Uncertainty for Free Electron Laser Scientific Facilities
On the reversed bias-variance tradeoff in deep ensembles
Robust Generalization of Quadratic Neural Networks via Function Identification
Exploring Corruption Robustness: Inductive Biases in Vision Transformers and MLP-Mixers
Deep Random Projection Outlyingness for Unsupervised Anomaly Detection
Deep Deterministic Uncertainty for Semantic Segmentation
Identifying Invariant and Sparse Predictors in High-dimensional Data
On Misclassification-Aware Smoothing for Robustness and Uncertainty Calibration
On Pitfalls in OoD Detection: Entropy Considered Harmful
PnPOOD: Out-Of-Distribution Detection for Text Classification via Plug and Play Data Augmentation
Augmented Invariant Regularization
Model-Based Robust Deep Learning: Generalizing to Natural, Out-of-Distribution Data
Improved Adversarial Robustness via Uncertainty Targeted Attacks
Notes on the Behavior of MC Dropout
Distribution-free Risk-controlling Prediction Sets
Stochastic Bouncy Particle Sampler for Bayesian Neural Networks
Novelty detection using ensembles with regularized disagreement
A Tale Of Two Long Tails
Defending against Adversarial Patches with Robust Self-Attention
Intrinsic uncertainties and where to find them
Dataset to Dataspace: A Topological-Framework to Improve Analysis of Machine Learning Model Performance
Analyzing And Improving Neural Networks By Generating Semantic Counterexamples Through Differentiable Rendering
Thinkback: Task-Specific Out-of-Distribution Detection
Relating Adversarially Robust Generalization to Flat Minima
Deep Quantile Aggregation