There has been growing interest in ensuring that deep learning systems are robust and reliable. Challenges arise when models receive samples drawn from outside the training distribution: a neural network trained to classify handwritten digits, for example, may assign high-confidence predictions to images of cats. Such anomalies are frequently encountered when ML models are deployed in the real world. Well-calibrated predictive uncertainty estimates are indispensable for applications such as self-driving vehicles and medical diagnosis systems, and generalization to unseen and worst-case inputs is essential for robustness to distributional shift. To deploy ML models safely in open environments, we must deepen our technical understanding in the following areas:
(1) Learning algorithms that can detect changes in data distribution (e.g. out-of-distribution examples) and improve out-of-distribution generalization (e.g. temporal, geographical, hardware, adversarial shifts);
(2) Mechanisms to estimate and calibrate confidence produced by neural networks in typical and unforeseen scenarios;
(3) Learning objectives that guide models toward the underlying causal mechanisms, which can guarantee robustness with respect to distribution shift.
To achieve these goals, it is also critical to dedicate substantial effort to
(4) Creating benchmark datasets and protocols for evaluating model performance under distribution shift;
(5) Studying key applications of robust and uncertainty-aware deep learning (e.g., computer vision, robotics, self-driving vehicles, medical imaging), as well as broader machine learning tasks.
This workshop will bring together researchers and practitioners from the machine learning community to foster future collaborations. Our agenda features invited speakers, contributed talks, poster sessions across multiple time zones, and a panel discussion on fundamentally important directions for robust and reliable deep learning.
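As a concrete illustration of goals (1) and (2), a simple and widely used out-of-distribution baseline thresholds the maximum softmax probability (MSP) of a classifier: near-uniform predictive distributions are flagged as anomalous. A minimal sketch, where the toy logits and the 0.7 threshold are invented for illustration (in practice the threshold is tuned on held-out data):

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    """Maximum softmax probability: higher means 'more in-distribution'."""
    return softmax(logits).max(axis=-1)

# A confident in-distribution prediction vs. a near-flat OOD one.
in_dist = np.array([[8.0, 0.5, 0.2]])
ood = np.array([[0.4, 0.3, 0.5]])
threshold = 0.7  # illustrative only

print(msp_score(in_dist) > threshold)  # high confidence: kept
print(msp_score(ood) > threshold)      # near-uniform: flagged as OOD
```

Several of the talks and posters below study when this confidence signal is (and is not) trustworthy.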
Fri 6:00 a.m. - 6:15 a.m. | Welcome (Opening Remarks) | Balaji Lakshminarayanan
Fri 6:15 a.m. - 6:45 a.m. | Uncertainty Modeling from 50M to 1B (Invited Talk) | Dustin Tran
I'll talk about one specific problem I have with the field: scale. Many papers fix an architecture and try to improve log-likelihood, comparing to the original base architecture regardless of how much additional compute is used to outperform the original model. Yet if we adjust for scale (for example, comparing an ensemble of size 10 to a single model scaled up 10x), we'd see improvements significantly diminish or vanish altogether. Ultimately, we should be examining the frontier of uncertainty-robustness performance as a function of compute. I'll substantiate this perspective with several works done with colleagues: works that advance the frontier with efficient ensembles alongside priors and inductive biases, and an examination of the uncertainty properties of existing giant models.
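The compute-adjusted comparison argued for in this abstract can be made concrete: place both an ensemble and a scaled-up single model on a common inference-cost axis before comparing log-likelihoods. A toy sketch, where the Dirichlet-sampled "predictions" and the linear cost model are invented for illustration and not from the talk:

```python
import numpy as np

def ensemble_nll(member_probs, labels):
    """NLL of an ensemble that averages member predictive distributions.

    member_probs: array of shape (members, n, num_classes).
    """
    avg = member_probs.mean(axis=0)  # (n, num_classes)
    return -np.mean(np.log(avg[np.arange(len(labels)), labels]))

rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=100)

# Stand-ins for predictions: 10 small models (cost c each) vs. one model
# scaled up ~10x (cost ~10c). Both sit at the same 10c inference budget.
small_members = rng.dirichlet(np.ones(3) * 2, size=(10, 100))
big_model = rng.dirichlet(np.ones(3) * 6, size=(1, 100))

# The fair comparison is NLL at matched cost, not ensemble-vs-one-member.
print("ensemble @ 10c:", ensemble_nll(small_members, labels))
print("big model @ 10c:", ensemble_nll(big_model, labels))
```

The point of the sketch is the bookkeeping, not the numbers: reporting both methods at the same budget is what traces out the performance-vs-compute frontier.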
Fri 6:45 a.m. - 8:00 a.m. | Live Poster session #1 (Europe/Asia friendly)
Fri 8:00 a.m. - 8:15 a.m. | Coffee Break 1
Fri 8:15 a.m. - 8:45 a.m. | Some Thoughts on Generalization, Robustness, and their application with CLIP (Invited Talk) | Alec Radford
OOD generalization is a very difficult problem. Instead of tackling it head on, this talk argues that, given the current strengths and weaknesses of deep learning, we should consider an alternative approach that tries to dodge the problem altogether. If we can develop scalable pre-training methods that leverage large and highly varied data sources, there is hope that many examples (which would have been OOD for standard ML datasets) will have at least some relevant training data, removing the need for elusive OOD capabilities.
Fri 8:45 a.m. - 10:00 a.m. | Live Poster session #2 (America friendly)
Fri 10:00 a.m. - 10:45 a.m. | Live Panel Discussion | Thomas Dietterich · Chelsea Finn · Kamalika Chaudhuri · Yarin Gal · Uri Shalit
Fri 10:45 a.m. - 11:15 a.m. | Lunch Break
Fri 11:15 a.m. - 11:25 a.m. | Repulsive Deep Ensembles are Bayesian (Contributed Talk) | Francesco D'Angelo
Fri 11:25 a.m. - 11:35 a.m. | Wide Mean-Field Variational Bayesian Neural Networks Ignore the Data (Contributed Talk) | Beau Coker
Fri 11:35 a.m. - 11:45 a.m. | Are Bayesian neural networks intrinsically good at out-of-distribution detection? (Contributed Talk) | Christian Henning
Fri 11:45 a.m. - 12:15 p.m. | Improving Robustness to Distribution Shifts: Methods and Benchmarks (Invited Talk) | Shiori Sagawa
Machine learning models deployed in the real world constantly face distribution shifts, yet current models are not robust to them: they can perform well when the train and test distributions are identical, but their performance plummets when they are evaluated on a different test distribution. In this talk, I will discuss methods and benchmarks for improving robustness to distribution shifts. First, we consider the problem of spurious correlations and show how to mitigate it with a combination of distributionally robust optimization (DRO) and controls on model complexity, e.g. strong L2 regularization, early stopping, or underparameterization. Second, we present WILDS, a curated and diverse collection of 10 datasets with real-world distribution shifts, which aims to address the under-representation of real-world shifts in the datasets widely used in the ML community today. We observe that existing methods fail to mitigate the performance drops caused by these distribution shifts, underscoring the need for new training methods that produce models that are more robust to the types of distribution shift that arise in practice.
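The DRO objective in this talk optimizes the worst-case group rather than the average example. A minimal sketch of the worst-group loss it targets; the group labels and per-example losses below are toy values, and the hard max shown here is a simplification of group DRO, which in Sagawa et al.'s formulation uses an online exponentially-weighted update over groups:

```python
import numpy as np

def worst_group_loss(losses, groups):
    """Worst-case (max) average loss over annotated groups, the quantity
    group DRO minimizes in place of the overall mean loss."""
    group_ids = np.unique(groups)
    per_group = np.array([losses[groups == g].mean() for g in group_ids])
    return per_group.max(), dict(zip(group_ids.tolist(), per_group.tolist()))

# Toy per-example losses: group 1 (say, a rare group where a spurious
# correlation fails) does much worse than the majority group 0.
losses = np.array([0.1, 0.2, 0.1, 1.5, 1.7])
groups = np.array([0, 0, 0, 1, 1])
worst, per_group = worst_group_loss(losses, groups)

print(per_group)  # average loss per group
print(worst)      # 1.6: the objective focuses on the failing group
```

The average loss here is only about 0.72, so a model minimizing the mean could look fine while group 1 silently fails; that gap is exactly what the worst-group objective exposes.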
Fri 12:15 p.m. - 12:30 p.m. | Coffee Break 2
Fri 12:30 p.m. - 1:00 p.m. | Evaluating deep learning models with applications to NLP (Invited Talk) | Nazneen Rajani
Aggregate evaluations of deep learning models on popular benchmarks have incentivized the creation of bigger models that are more accurate on iid data. As the research community realizes that these models do not generalize out of distribution, the trend has shifted to evaluations on adversarially constructed, unnatural datasets. However, both extremes have limitations when it comes to meeting the goals of evaluation. In this talk, I propose that the goal of evaluation is to inform a user's next action, in the form of (1) further analysis or (2) model patching. Thinking of evaluation as an iterative process dovetails with these goals. Our work on Robustness Gym (RG) proposes an iterative process of evaluation and explains how it enables a user to iterate on their model development process. I will give two concrete examples in NLP demonstrating how RG supports these evaluation goals. Towards the end of the talk, I will discuss some caveats associated with evaluating pre-trained language models (PLMs), focusing in particular on the problem of input contamination, with examples from our work on SummVis. Using these examples from RG and SummVis, I hope to draw attention to the limitations of current evaluations and the need for a more thorough process that helps us gain a better understanding of our deep learning models.
Fri 1:00 p.m. - 1:10 p.m. | Calibrated Out-of-Distribution Detection with Conformal P-values (Contributed Talk) | Lihua Lei
Fri 1:10 p.m. - 1:20 p.m. | Provably Robust Detection of Out-of-distribution Data (almost) for free (Contributed Talk) | Alexander Meinke
Fri 1:20 p.m. - 1:30 p.m. | Out-of-Distribution Dynamics Detection: RL-Relevant Benchmarks and Results (Contributed Talk) | Mohamad H Danesh
Fri 1:30 p.m. - 2:00 p.m. | Contrastive Learning for Novelty Detection (Invited Talk) | Jinwoo Shin
Novelty detection, i.e., identifying whether a given sample is drawn from outside the training distribution, is essential for reliable machine learning. To this end, there have been many attempts to learn a representation well-suited for novelty detection and to design a score based on such a representation. In this talk, I will present a simple yet effective method named contrasting shifted instances (CSI), inspired by the recent success of contrastive learning of visual representations. Specifically, in addition to contrasting a given sample with other instances, as in conventional contrastive learning methods, our training scheme contrasts the sample with distributionally-shifted augmentations of itself. Based on this, we propose a new detection score specific to the proposed training scheme. Our experiments demonstrate the superiority of our method under various novelty detection scenarios, including unlabeled one-class, unlabeled multi-class and labeled multi-class settings, on various image benchmark datasets. This is joint work with Jihoon Tack, Sangwoo Mo and Jongheon Jeong (all from KAIST).
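The core idea of the training scheme described above, treating a distributionally-shifted view of a sample (e.g. a hard rotation) as a negative rather than a positive, can be sketched with an InfoNCE-style loss on toy embeddings. Everything below is a simplified stand-in: the 2-d "embeddings" are invented, and CSI's actual detection score combines the learned contrastive representation with further terms not shown here.

```python
import numpy as np

def nt_xent_pair(z_anchor, z_pos, z_negs, tau=0.5):
    """InfoNCE-style loss for one anchor: pull the positive embedding
    close, push the negatives away. In a CSI-style scheme, the negatives
    include the embedding of a hard-shifted view of the anchor (e.g. a
    90-degree rotation), rather than treating that view as a positive."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    sims = [cos(z_anchor, z_pos)] + [cos(z_anchor, n) for n in z_negs]
    logits = np.array(sims) / tau
    logits -= logits.max()  # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

# Toy embeddings: a standard augmentation stays aligned with the anchor,
# while the shifted (rotated) view is pushed away as a negative.
anchor = np.array([1.0, 0.0])
standard_aug = np.array([0.9, 0.1])
shifted_view = np.array([0.0, 1.0])
print(nt_xent_pair(anchor, standard_aug, [shifted_view]))
```

The design choice this isolates: by making the shifted view a negative, the representation is trained to separate a sample from "out-of-distribution-like" versions of itself, which is what makes a similarity-based novelty score informative.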
Workshop Posters
- A simple fix to Mahalanobis distance for improving near-OOD detection
- Wide Mean-Field Variational Bayesian Neural Networks Ignore the Data
- Precise characterization of the prior predictive distribution of deep ReLU networks
- Disentangling the Roles of Curation, Data-Augmentation and the Prior in the Cold Posterior Effect
- Exploring the Limits of Out-of-Distribution Detection
- Repulsive Deep Ensembles are Bayesian
- Calibrated Out-of-Distribution Detection with Conformal P-values
- Are Bayesian neural networks intrinsically good at out-of-distribution detection?
- Provably Robust Detection of Out-of-distribution Data (almost) for free
- Out-of-Distribution Dynamics Detection: RL-Relevant Benchmarks and Results
- Rethinking Assumptions in Deep Anomaly Detection
- Multiple Moment Matching Inference: A Flexible Approximate Inference Algorithm
- PAC Prediction Sets Under Covariate Shift
- Correct-N-Contrast: a Contrastive Approach for Improving Robustness to Spurious Correlations
- Do We Really Need to Learn Representations from In-domain Data for Outlier Detection?
- DATE: Detecting Anomalies in Text via Self-Supervision of Transformers
- Beyond Pinball Loss: Quantile Methods for Calibrated Uncertainty Quantification
- Uncertainty Toolbox: an Open-Source Library for Assessing, Visualizing, and Improving Uncertainty Quantification
- Implicit Ensemble Training for Efficient and Robust Multiagent Reinforcement Learning
- Failures of Uncertainty Estimation on Out-Of-Distribution Samples: Experimental Results from Medical Applications Lead to Theoretical Insights
- On Out-of-distribution Detection with Energy-Based Models
- Deterministic Neural Networks with Inductive Biases Capture Epistemic and Aleatoric Uncertainty
- Transfer and Marginalize: Explaining Away Label Noise with Privileged Information
- Meta-Calibration: Meta-Learning of Model Calibration Using Differentiable Expected Calibration Error
- Inferring Black Hole Properties from Astronomical Multivariate Time Series with Bayesian Attentive Neural Processes
- Towards improving robustness of compressed CNNs
- SAND-mask: An Enhanced Gradient Masking Strategy for Invariant Prediction in Domain Generalization
- Efficient Gaussian Neural Processes for Regression
- Simple, Attack-Agnostic Defense Against Targeted Training Set Attacks Using Cosine Similarity
- Safety & Exploration: A Comparative Study of Uses of Uncertainty in Reinforcement Learning
- Rethinking Function-Space Variational Inference in Bayesian Neural Networks
- Understanding the Under-Coverage Bias in Uncertainty Estimation
- BETH Dataset: Real Cybersecurity Data for Anomaly Detection Research
- Mean Embeddings with Test-Time Data Augmentation for Ensembling of Representations
- Deep Ensemble Uncertainty Fails as Network Width Increases: Why, and How to Fix It
- Exact and Efficient Adversarial Robustness with Decomposable Neural Networks
- Consistency Regularization for Training Confidence-Calibrated Classifiers
- Diverse and Amortised Counterfactual Explanations for Uncertainty Estimates
- Quantization of Bayesian neural networks and its effect on quality of uncertainty
- Class-Distribution-Aware Calibration for Long-Tailed Visual Recognition
- Bayesian Neural Networks with Soft Evidence
- Anomaly Detection for Event Data with Temporal Point Processes
- Batch Inverse-Variance Weighting: Deep Heteroscedastic Regression
- An Empirical Study of Invariant Risk Minimization on Deep Models
- A Bayesian Approach to Invariant Deep Neural Networks
- Practical posterior Laplace approximation with optimization-driven second moment estimation
- Variational Generative Flows for Reconstruction Uncertainty Estimation
- Improving the Accuracy-Robustness Trade-Off for Dual-Domain Adversarial Training
- Consistency Regularization Can Improve Robustness to Label Noise
- Neural Variational Gradient Descent
- Evaluating the Use of Reconstruction Error for Novelty Localization
- Accuracy on the Line: on the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization
- The Hidden Uncertainty in a Neural Network’s Activations
- On the Calibration of Deterministic Epistemic Uncertainty
- Objective Robustness in Deep Reinforcement Learning
- Epistemic Uncertainty in Learning Chaotic Dynamical Systems
- Towards Stochastic Neural Networks via Inductive Wasserstein Embeddings
- Distribution-free uncertainty quantification for classification under label shift
- How does a Neural Network's Architecture Impact its Robustness to Noisy Labels?
- Top-label calibration
- Learning to Align the Support of Distributions
- Beyond First-Order Uncertainty Estimation with Evidential Models for Open-World Recognition
- Revisiting Out-of-Distribution Detection: A Simple Baseline is Surprisingly Effective
- Contrastive Predictive Coding for Anomaly Detection and Segmentation
- Multi-headed Neural Ensemble Search
- A variational approximate posterior for the deep Wishart process
- What Are Effective Labels for Augmented Data? Improving Calibration and Robustness with AutoLabel
- On Stein Variational Neural Network Ensembles
- Uncertainty-Aware Boosted Ensembling in Multi-Modal Settings
- RouBL: A computationally cheap way to go beyond mean-field variational inference
- No True State-of-the-Art? OOD Detection Methods are Inconsistent across Datasets
- Out-of-Distribution Generalization with Deep Equilibrium Models
- Mixture Proportion Estimation and PU Learning: A Modern Approach
- On The Dark Side Of Calibration For Modern Neural Networks
- Domain Adaptation with Factorizable Joint Shift
- Scaling Laws for the Out-of-Distribution Generalization of Image Classifiers
- Learning Invariant Weights in Neural Networks
- Relational Deep Reinforcement Learning and Latent Goals for Following Instructions in Temporal Logic
- On the Effectiveness of Mode Exploration in Bayesian Model Averaging for Neural Networks
- Training-Free Uncertainty Estimation for Dense Regression: Sensitivity as a Surrogate
- Detecting OODs as datapoints with High Uncertainty
- Multi-task Transformation Learning for Robust Out-of-Distribution Detection
- Directly Training Joint Energy-Based Models for Conditional Synthesis and Calibrated Prediction of Multi-Attribute Data
- Deep Learning with Quantified Uncertainty for Free Electron Laser Scientific Facilities
- On the reversed bias-variance tradeoff in deep ensembles
- Robust Generalization of Quadratic Neural Networks via Function Identification
- Exploring Corruption Robustness: Inductive Biases in Vision Transformers and MLP-Mixers
- Deep Random Projection Outlyingness for Unsupervised Anomaly Detection
- Deep Deterministic Uncertainty for Semantic Segmentation
- Identifying Invariant and Sparse Predictors in High-dimensional Data
- On Misclassification-Aware Smoothing for Robustness and Uncertainty Calibration
- On Pitfalls in OoD Detection: Entropy Considered Harmful
- PnPOOD: Out-Of-Distribution Detection for Text Classification via Plug and Play Data Augmentation
- Augmented Invariant Regularization
- Model-Based Robust Deep Learning: Generalizing to Natural, Out-of-Distribution Data
- Improved Adversarial Robustness via Uncertainty Targeted Attacks
- Notes on the Behavior of MC Dropout
- Distribution-free Risk-controlling Prediction Sets
- Stochastic Bouncy Particle Sampler for Bayesian Neural Networks
- Novelty detection using ensembles with regularized disagreement
- A Tale Of Two Long Tails
- Defending against Adversarial Patches with Robust Self-Attention
- Intrinsic uncertainties and where to find them
- Dataset to Dataspace: A Topological-Framework to Improve Analysis of Machine Learning Model Performance
- Analyzing And Improving Neural Networks By Generating Semantic Counterexamples Through Differentiable Rendering
- Thinkback: Task-Specific Out-of-Distribution Detection
- Relating Adversarially Robust Generalization to Flat Minima
- Deep Quantile Aggregation
Author Information
Balaji Lakshminarayanan (Google Brain)
Dan Hendrycks (UC Berkeley)
Sharon Li (University of Wisconsin-Madison)
Jasper Snoek (Google Brain)
Silvia Chiappa (DeepMind)
Sebastian Nowozin (Microsoft Research)
Thomas Dietterich (Oregon State University)
More from the Same Authors
-
2021 : A simple fix to Mahalanobis distance for improving near-OOD detection »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Wide Mean-Field Variational Bayesian Neural Networks Ignore the Data »
Stanislav Fort · Jasper Snoek -
2021 : Precise characterization of the prior predictive distribution of deep ReLU networks »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Disentangling the Roles of Curation, Data-Augmentation and the Prior in the Cold Posterior Effect »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Exploring the Limits of Out-of-Distribution Detection »
Jasper Snoek -
2021 : Repulsive Deep Ensembles are Bayesian »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Calibrated Out-of-Distribution Detection with Conformal P-values »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Are Bayesian neural networks intrinsically good at out-of-distribution detection? »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Provably Robust Detection of Out-of-distribution Data (almost) for free »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Out-of-Distribution Dynamics Detection: RL-Relevant Benchmarks and Results »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Rethinking Assumptions in Deep Anomaly Detection »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Multiple Moment Matching Inference: A Flexible Approximate Inference Algorithm »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : PAC Prediction Sets Under Covariate Shift »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Correct-N-Contrast: a Contrastive Approach for Improving Robustness to Spurious Correlations »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Do We Really Need to Learn Representations from In-domain Data for Outlier Detection? »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : DATE: Detecting Anomalies in Text via Self-Supervision of Transformers »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Beyond Pinball Loss: Quantile Methods for Calibrated Uncertainty Quantification »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Uncertainty Toolbox: an Open-Source Library for Assessing, Visualizing, and Improving Uncertainty Quantification »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Implicit Ensemble Training for Efficient and Robust Multiagent Reinforcement Learning »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Failures of Uncertainty Estimation on Out-Of-Distribution Samples: Experimental Results from Medical Applications Lead to Theoretical Insights »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : On Out-of-distribution Detection with Energy-Based Models »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Deterministic Neural Networks with Inductive Biases Capture Epistemic and Aleatoric Uncertainty »
Andreas Kirsch · Balaji Lakshminarayanan · Jasper Snoek -
2021 : Transfer and Marginalize: Explaining Away Label Noise with Privileged Information »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Meta-Calibration: Meta-Learning of Model Calibration Using Differentiable Expected Calibration Error »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Inferring Black Hole Properties from Astronomical Multivariate Time Series with Bayesian Attentive Neural Processes »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Towards improving robustness of compressed CNNs »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : SAND-mask: An Enhanced Gradient Masking Strategy for Invariant Prediction in Domain Generalization »
Soroosh Shahtalebi · Jasper Snoek · Balaji Lakshminarayanan -
2021 : Efficient Gaussian Neural Processes for Regression »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Simple, Attack-Agnostic Defense Against Targeted Training Set Attacks Using Cosine Similarity »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Safety & Exploration: A Comparative Study of Uses of Uncertainty in Reinforcement Learning »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Rethinking Function-Space Variational Inference in Bayesian Neural Networks »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Understanding the Under-Coverage Bias in Uncertainty Estimation »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : BETH Dataset: Real Cybersecurity Data for Anomaly Detection Research »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Mean Embeddings with Test-Time Data Augmentation for Ensembling of Representations »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Deep Ensemble Uncertainty Fails as Network Width Increases: Why, and How to Fix It »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Exact and Efficient Adversarial Robustness with Decomposable Neural Networks »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Consistency Regularization for Training Confidence-Calibrated Classifiers »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Diverse and Amortised Counterfactual Explanations for Uncertainty Estimates »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Quantization of Bayesian neural networks and its effect on quality of uncertainty »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Class-Distribution-Aware Calibration for Long-Tailed Visual Recognition »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Bayesian Neural Networks with Soft Evidence »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Anomaly Detection for Event Data with Temporal Point Processes »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Batch Inverse-Variance Weighting: Deep Heteroscedastic Regression »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : An Empirical Study of Invariant Risk Minimization on Deep Models »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : A Bayesian Approach to Invariant Deep Neural Networks »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Practical posterior Laplace approximation with optimization-driven second moment estimation »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Variational Generative Flows for Reconstruction Uncertainty Estimation »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Improving the Accuracy-Robustness Trade-Off for Dual-Domain Adversarial Training »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Consistency Regularization Can Improve Robustness to Label Noise »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Neural Variational Gradient Descent »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Evaluating the Use of Reconstruction Error for Novelty Localization »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Accuracy on the Line: on the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : The Hidden Uncertainty in a Neural Network’s Activations »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : On the Calibration of Deterministic Epistemic Uncertainty »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Objective Robustness in Deep Reinforcement Learning »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Epistemic Uncertainty in Learning Chaotic Dynamical Systems »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Towards Stochastic Neural Networks via Inductive Wasserstein Embeddings »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Distribution-free uncertainty quantification for classification under label shift »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : How does a Neural Network's Architecture Impact its Robustness to Noisy Labels? »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Top-label calibration »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Learning to Align the Support of Distributions »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Beyond First-Order Uncertainty Estimation with Evidential Models for Open-World Recognition »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Revisiting Out-of-Distribution Detection: A Simple Baseline is Surprisingly Effective »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Contrastive Predictive Coding for Anomaly Detection and Segmentation »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Multi-headed Neural Ensemble Search »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : A variational approximate posterior for the deep Wishart process »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : What Are Effective Labels for Augmented Data? Improving Calibration and Robustness with AutoLabel »
Yao Qin · Jasper Snoek · Balaji Lakshminarayanan -
2021 : On Stein Variational Neural Network Ensembles »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Uncertainty-Aware Boosted Ensembling in Multi-Modal Settings »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : RouBL: A computationally cheap way to go beyond mean-field variational inference »
Sahar Karimi · Balaji Lakshminarayanan · Jasper Snoek -
2021 : No True State-of-the-Art? OOD Detection Methods are Inconsistent across Datasets »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Out-of-Distribution Generalization with Deep Equilibrium Models »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Mixture Proportion Estimation and PU Learning: A Modern Approach »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : On The Dark Side Of Calibration For Modern Neural Networks »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Domain Adaptation with Factorizable Joint Shift »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Scaling Laws for the Out-of-Distribution Generalization of Image Classifiers »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Learning Invariant Weights in Neural Networks »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Relational Deep Reinforcement Learning and Latent Goals for Following Instructions in Temporal Logic »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : On the Effectiveness of Mode Exploration in Bayesian Model Averaging for Neural Networks »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Training-Free Uncertainty Estimation for Dense Regression: Sensitivity as a Surrogate »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Detecting OODs as datapoints with High Uncertainty »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Multi-task Transformation Learning for Robust Out-of-Distribution Detection »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Directly Training Joint Energy-Based Models for Conditional Synthesis and Calibrated Prediction of Multi-Attribute Data »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Deep Learning with Quantified Uncertainty for Free Electron Laser Scientific Facilities »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : On the reversed bias-variance tradeoff in deep ensembles »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Robust Generalization of Quadratic Neural Networks via Function Identification »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Exploring Corruption Robustness: Inductive Biases in Vision Transformers and MLP-Mixers »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Deep Random Projection Outlyingness for Unsupervised Anomaly Detection »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Deep Deterministic Uncertainty for Semantic Segmentation »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Identifying Invariant and Sparse Predictors in High-dimensional Data »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : On Misclassification-Aware Smoothing for Robustness and Uncertainty Calibration »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : On Pitfalls in OoD Detection: Entropy Considered Harmful »
Andreas Kirsch · Jasper Snoek · Balaji Lakshminarayanan -
2021 : PnPOOD: Out-Of-Distribution Detection for Text Classification via Plug and Play Data Augmentation »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Augmented Invariant Regularization »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Model-Based Robust Deep Learning: Generalizing to Natural, Out-of-Distribution Data »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Improved Adversarial Robustness via Uncertainty Targeted Attacks »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Notes on the Behavior of MC Dropout »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Distribution-free Risk-controlling Prediction Sets »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Stochastic Bouncy Particle Sampler for Bayesian Neural Networks »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Novelty detection using ensembles with regularized disagreement »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : A Tale Of Two Long Tails »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Defending against Adversarial Patches with Robust Self-Attention »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Intrinsic uncertainties and where to find them »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Dataset to Dataspace: A Topological-Framework to Improve Analysis of Machine Learning Model Performance »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Analyzing And Improving Neural Networks By Generating Semantic Counterexamples Through Differentiable Rendering »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Thinkback: Task-Specific Out-of-Distribution Detection »
Jasper Snoek · Balaji Lakshminarayanan -
2021 : Relating Adversarially Robust Generalization to Flat Minima »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : Deep Quantile Aggregation »
Balaji Lakshminarayanan · Jasper Snoek -
2021 : What Are Effective Labels for Augmented Data? Improving Calibration and Robustness with AutoLabel »
Yao Qin · Jasper Snoek -
2022 : Are Vision Transformers Robust to Spurious Correlations ? »
Soumya Suvra Ghosal · Yifei Ming · Sharon Li -
2022 : Plex: Towards Reliability using Pretrained Large Model Extensions »
Dustin Tran · Andreas Kirsch · Balaji Lakshminarayanan · Huiyi Hu · Du Phan · D. Sculley · Jasper Snoek · Jeremiah Liu · Jie Ren · Joost van Amersfoort · Kehang Han · E. Kelly Buchanan · Kevin Murphy · Mark Collier · Mike Dusenberry · Neil Band · Nithum Thain · Rodolphe Jenatton · Tim G. J Rudner · Yarin Gal · Zachary Nado · Zelda Mariet · Zi Wang · Zoubin Ghahramani -
2022 : Plex: Towards Reliability using Pretrained Large Model Extensions »
Dustin Tran · Andreas Kirsch · Balaji Lakshminarayanan · Huiyi Hu · Du Phan · D. Sculley · Jasper Snoek · Jeremiah Liu · Jie Ren · Joost van Amersfoort · Kehang Han · Estefany Kelly Buchanan · Kevin Murphy · Mark Collier · Michael Dusenberry · Neil Band · Nithum Thain · Rodolphe Jenatton · Tim G. J Rudner · Yarin Gal · Zachary Nado · Zelda Mariet · Zi Wang · Zoubin Ghahramani -
2023 : Building One-class Detector for Anything: Open-vocabulary Zero-shot OOD Detection Using Text-image Models »
Yunhao Ge · Jie Ren · Jiaping Zhao · Kaifeng Chen · Andrew Gallagher · Laurent Itti · Balaji Lakshminarayanan -
2023 : Morse Neural Networks for Uncertainty Quantification »
Benoit Dherin · Huiyi Hu · Jie Ren · Michael Dusenberry · Balaji Lakshminarayanan -
2023 Poster: Mitigating Memorization of Noisy Labels by Clipping the Model Prediction »
Hongxin Wei · Huiping Zhuang · Renchunzi Xie · Lei Feng · Gang Niu · Bo An · Sharon Li -
2023 Poster: Do the Rewards Justify the Means? Measuring Trade-Offs Between Rewards and Ethical Behavior in the Machiavelli Benchmark »
Alexander Pan · Jun Shern Chan · Andy Zou · Nathaniel Li · Steven Basart · Thomas Woodside · Hanlin Zhang · Scott Emmons · Dan Hendrycks -
2023 Poster: A Simple Zero-shot Prompt Weighting Technique to Improve Prompt Ensembling in Text-Image Models »
James Allingham · Jie Ren · Michael Dusenberry · Xiuye Gu · Yin Cui · Dustin Tran · Jeremiah Liu · Balaji Lakshminarayanan -
2023 Poster: When and How Does Known Class Help Discover Unknown Ones? Provable Understanding Through Spectral Analysis »
Yiyou Sun · Zhenmei Shi · Yingyu Liang · Sharon Li -
2023 Oral: Do the Rewards Justify the Means? Measuring Trade-Offs Between Rewards and Ethical Behavior in the Machiavelli Benchmark »
Alexander Pan · Jun Shern Chan · Andy Zou · Nathaniel Li · Steven Basart · Thomas Woodside · Hanlin Zhang · Scott Emmons · Dan Hendrycks -
2023 Poster: Feed Two Birds with One Scone: Exploiting Wild Data for Both Out-of-Distribution Generalization and Detection »
Haoyue Bai · Gregory Canal · Xuefeng Du · Jeongyeol Kwon · Robert Nowak · Sharon Li -
2022 Workshop: Workshop on Distribution-Free Uncertainty Quantification »
Anastasios Angelopoulos · Stephen Bates · Sharon Li · Ryan Tibshirani · Aaditya Ramdas · Stephen Bates -
2022 : Challenges and Opportunities in Handling Data Distributional Shift »
Sharon Li -
2022 Poster: Out-of-Distribution Detection with Deep Nearest Neighbors »
Yiyou Sun · Yifei Ming · Jerry Zhu · Sharon Li -
2022 Poster: Training OOD Detectors in their Natural Habitats »
Julian Katz-Samuels · Julia Nakhleh · Robert Nowak · Sharon Li -
2022 Poster: Mitigating Neural Network Overconfidence with Logit Normalization »
Hongxin Wei · Renchunzi Xie · Hao Cheng · Lei Feng · Bo An · Sharon Li -
2022 Poster: Scaling Out-of-Distribution Detection for Real-World Settings »
Dan Hendrycks · Steven Basart · Mantas Mazeika · Andy Zou · Joseph Kwon · Mohammadreza Mostajabi · Jacob Steinhardt · Dawn Song -
2022 Spotlight: Scaling Out-of-Distribution Detection for Real-World Settings »
Dan Hendrycks · Steven Basart · Mantas Mazeika · Andy Zou · Joseph Kwon · Mohammadreza Mostajabi · Jacob Steinhardt · Dawn Song -
2022 Spotlight: Training OOD Detectors in their Natural Habitats »
Julian Katz-Samuels · Julia Nakhleh · Robert Nowak · Sharon Li -
2022 Spotlight: Out-of-Distribution Detection with Deep Nearest Neighbors »
Yiyou Sun · Yifei Ming · Jerry Zhu · Sharon Li -
2022 Spotlight: Mitigating Neural Network Overconfidence with Logit Normalization »
Hongxin Wei · Renchunzi Xie · Hao Cheng · Lei Feng · Bo An · Sharon Li -
2022 Poster: POEM: Out-of-Distribution Detection with Posterior Sampling »
Yifei Ming · Ying Fan · Sharon Li -
2022 Oral: POEM: Out-of-Distribution Detection with Posterior Sampling »
Yifei Ming · Ying Fan · Sharon Li -
2021 : LOOD: Localization-based Uncertainty Estimation for Medical Imaging (Spotlight #14) »
Yiyou Sun · Sharon Li -
2021 Workshop: Workshop on Distribution-Free Uncertainty Quantification »
Anastasios Angelopoulos · Stephen Bates · Sharon Li · Aaditya Ramdas · Ryan Tibshirani -
2021 Workshop: A Blessing in Disguise: The Prospects and Perils of Adversarial Machine Learning »
Hang Su · Yinpeng Dong · Tianyu Pang · Eric Wong · Zico Kolter · Shuo Feng · Bo Li · Henry Liu · Dan Hendrycks · Francesco Croce · Leslie Rice · Tian Tian -
2021 : Live Panel Discussion »
Thomas Dietterich · Chelsea Finn · Kamalika Chaudhuri · Yarin Gal · Uri Shalit -
2021 : RL Foundation Panel »
Matthew Botvinick · Thomas Dietterich · Leslie Kaelbling · John Langford · Warren B Powell · Csaba Szepesvari · Lihong Li · Yuxi Li -
2021 : Welcome »
Balaji Lakshminarayanan -
2020 Workshop: Uncertainty and Robustness in Deep Learning Workshop (UDL) »
Sharon Yixuan Li · Balaji Lakshminarayanan · Dan Hendrycks · Thomas Dietterich · Jasper Snoek -
2020 Poster: The k-tied Normal Distribution: A Compact Parameterization of Gaussian Mean Field Posteriors in Bayesian Neural Networks »
Jakub Swiatkowski · Kevin Roth · Bastiaan Veeling · Linh Tran · Joshua V Dillon · Jasper Snoek · Stephan Mandt · Tim Salimans · Rodolphe Jenatton · Sebastian Nowozin -
2020 Poster: Efficient and Scalable Bayesian Neural Nets with Rank-1 Factors »
Mike Dusenberry · Ghassen Jerfel · Yeming Wen · Yian Ma · Jasper Snoek · Katherine Heller · Balaji Lakshminarayanan · Dustin Tran -
2020 Poster: TaskNorm: Rethinking Batch Normalization for Meta-Learning »
John Bronskill · Jonathan Gordon · James Requeima · Sebastian Nowozin · Richard E Turner -
2020 Poster: How Good is the Bayes Posterior in Deep Neural Networks Really? »
Florian Wenzel · Kevin Roth · Bastiaan Veeling · Jakub Swiatkowski · Linh Tran · Stephan Mandt · Jasper Snoek · Tim Salimans · Rodolphe Jenatton · Sebastian Nowozin -
2019 : Panel Discussion (moderator: Tom Dietterich) »
Max Welling · Kilian Weinberger · Terrance Boult · Dawn Song · Thomas Dietterich -
2019 Workshop: Uncertainty and Robustness in Deep Learning »
Sharon Yixuan Li · Dan Hendrycks · Thomas Dietterich · Balaji Lakshminarayanan · Justin Gilmer -
2019 Poster: Learning from Delayed Outcomes via Proxies with Applications to Recommender Systems »
Timothy Mann · Sven Gowal · Andras Gyorgy · Huiyi Hu · Ray Jiang · Balaji Lakshminarayanan · Prav Srinivasan -
2019 Oral: Learning from Delayed Outcomes via Proxies with Applications to Recommender Systems »
Timothy Mann · Sven Gowal · Andras Gyorgy · Huiyi Hu · Ray Jiang · Balaji Lakshminarayanan · Prav Srinivasan -
2019 Oral: Hybrid Models with Deep and Invertible Features »
Eric Nalisnick · Akihiro Matsukawa · Yee-Whye Teh · Dilan Gorur · Balaji Lakshminarayanan -
2019 Poster: EDDI: Efficient Dynamic Discovery of High-Value Information with Partial VAE »
Chao Ma · Sebastian Tschiatschek · Konstantina Palla · Jose Miguel Hernandez-Lobato · Sebastian Nowozin · Cheng Zhang -
2019 Poster: Hybrid Models with Deep and Invertible Features »
Eric Nalisnick · Akihiro Matsukawa · Yee-Whye Teh · Dilan Gorur · Balaji Lakshminarayanan -
2019 Oral: EDDI: Efficient Dynamic Discovery of High-Value Information with Partial VAE »
Chao Ma · Sebastian Tschiatschek · Konstantina Palla · Jose Miguel Hernandez-Lobato · Sebastian Nowozin · Cheng Zhang -
2019 Poster: Using Pre-Training Can Improve Model Robustness and Uncertainty »
Dan Hendrycks · Kimin Lee · Mantas Mazeika -
2019 Oral: Using Pre-Training Can Improve Model Robustness and Uncertainty »
Dan Hendrycks · Kimin Lee · Mantas Mazeika -
2018 Poster: Discovering and Removing Exogenous State Variables and Rewards for Reinforcement Learning »
Thomas Dietterich · George Trimponias · Zhitang Chen -
2018 Poster: Open Category Detection with PAC Guarantees »
Si Liu · Risheek Garrepalli · Thomas Dietterich · Alan Fern · Dan Hendrycks -
2018 Oral: Discovering and Removing Exogenous State Variables and Rewards for Reinforcement Learning »
Thomas Dietterich · George Trimponias · Zhitang Chen -
2018 Oral: Open Category Detection with PAC Guarantees »
Si Liu · Risheek Garrepalli · Thomas Dietterich · Alan Fern · Dan Hendrycks -
2018 Poster: Which Training Methods for GANs do actually Converge? »
Lars Mescheder · Andreas Geiger · Sebastian Nowozin -
2018 Oral: Which Training Methods for GANs do actually Converge? »
Lars Mescheder · Andreas Geiger · Sebastian Nowozin -
2017 Workshop: Implicit Generative Models »
Rajesh Ranganath · Ian Goodfellow · Dustin Tran · David Blei · Balaji Lakshminarayanan · Shakir Mohamed -
2017 Poster: Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks »
Lars Mescheder · Sebastian Nowozin · Andreas Geiger -
2017 Talk: Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks »
Lars Mescheder · Sebastian Nowozin · Andreas Geiger