Workshop
Uncertainty and Robustness in Deep Learning
Balaji Lakshminarayanan · Dan Hendrycks · Sharon Li · Jasper Snoek · Silvia Chiappa · Sebastian Nowozin · Thomas Dietterich

Fri Jul 23 06:00 AM -- 02:00 PM (PDT)
Event URL: https://sites.google.com/corp/view/udlworkshop2021/home

There has been growing interest in ensuring that deep learning systems are robust and reliable. Challenges arise when models receive samples drawn from outside the training distribution. For example, a neural network tasked with classifying handwritten digits may assign high confidence predictions to cat images. Anomalies are frequently encountered when deploying ML models in the real world. Well-calibrated predictive uncertainty estimates are indispensable for many machine learning applications, such as self-driving vehicles and medical diagnosis systems. Generalization to unseen and worst-case inputs is also essential for robustness to distributional shift. In order to have ML models safely deployed in open environments, we must deepen technical understanding in the following areas:

(1) Learning algorithms that can detect changes in data distribution (e.g. out-of-distribution examples) and improve out-of-distribution generalization (e.g. temporal, geographical, hardware, adversarial shifts);
(2) Mechanisms to estimate and calibrate confidence produced by neural networks in typical and unforeseen scenarios;
(3) Methods that guide learning towards the underlying causal mechanisms, which can guarantee robustness with respect to distribution shift.

In order to achieve these goals, it is critical to dedicate substantial effort to:
(4) Creating benchmark datasets and protocols for evaluating model performance under distribution shift;
(5) Studying key applications of robust and uncertainty-aware deep learning (e.g., computer vision, robotics, self-driving vehicles, medical imaging), as well as broader machine learning tasks.
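Point (2) above, estimating and calibrating the confidence produced by neural networks, is commonly quantified with the expected calibration error (ECE). The sketch below is a minimal NumPy version; the equal-width binning scheme and the toy data are illustrative choices, not a prescribed implementation:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: the weighted average gap between mean confidence and
    accuracy over equal-width confidence bins in (0, 1]."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in bin
    return ece

# Toy case of perfect calibration: 75% confidence, 75% accuracy.
conf = np.full(8, 0.75)
corr = np.array([1, 1, 1, 1, 1, 1, 0, 0])
print(round(expected_calibration_error(conf, corr), 4))  # 0.0
```

A perfectly calibrated model scores 0; an overconfident model (say, 90% confidence but 0% accuracy) scores the full confidence-accuracy gap.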

This workshop will bring together researchers and practitioners from the machine learning community to foster future collaborations. Our agenda will feature invited speakers, contributed talks, poster sessions across multiple time zones, and a panel discussion on fundamentally important directions for robust and reliable deep learning.

Fri 6:00 a.m. - 6:15 a.m.
Welcome (Opening Remarks)   
Balaji Lakshminarayanan
Fri 6:15 a.m. - 6:45 a.m.

I'll talk about one specific problem I have with the field: scale. Many papers fix an architecture and try to improve log-likelihood, comparing to the original base architecture regardless of how much additional compute is used to outperform the original model. Yet, if we adjust for scale—for example, compare an ensemble of size 10 to a model scaled up 10x—we'd see improvements significantly diminish or vanish altogether. Ultimately, we should be examining the frontier of uncertainty-robustness performance as a function of compute. I'll substantiate this perspective with a few works with colleagues. These works advance the frontier with efficient ensembles alongside priors and inductive biases; and we'll examine uncertainty properties of existing giant models.
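As a rough illustration of the compute-adjusted comparison described above, the sketch below averages the predicted probabilities of a hypothetical 10-member ensemble; the shapes and random logits are illustrative, not from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical logits from an ensemble of 10 small models on one input
# (shape: [ensemble_size, num_classes]).
member_logits = rng.normal(size=(10, 5))

# Deep-ensemble prediction: average the member *probabilities*.
ensemble_probs = softmax(member_logits).mean(axis=0)

# Compute-adjusted framing: this prediction costs ~10x the FLOPs of one
# member, so its fair baseline is a single model scaled up ~10x, not the
# unscaled base model.
assert np.isclose(ensemble_probs.sum(), 1.0)
```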

Dustin Tran
Fri 6:45 a.m. - 8:00 a.m.

Poster Session 1 (Gather.town rooms):
https://eventhosts.gather.town/dl7kfsNh69JsAlk6/udl-poster-room-1
https://eventhosts.gather.town/tLEXMjFk7rFVClBm/udl-poster-room-2

Fri 8:00 a.m. - 8:15 a.m.
Coffee Break 1 (Break)
Fri 8:15 a.m. - 8:45 a.m.

OOD generalization is a very difficult problem. Instead of tackling it head on, this talk argues that, when considering the current strengths and weaknesses of deep learning, we should consider an alternative approach which tries to dodge the problem altogether. If we can develop scalable pre-training methods that can leverage large and highly varied data sources, there is a hope that many examples (which would have been OOD for standard ML datasets) will have at least some relevant training data, removing the need for elusive OOD capabilities.

Alec Radford
Fri 8:45 a.m. - 10:00 a.m.

Poster Session 2 (Gather.town rooms):
https://eventhosts.gather.town/nrNOoqppCNoV1Q0l/udl-poster-room-3
https://eventhosts.gather.town/lAs6g8bdXxfiZgrB/udl-poster-room-4

Fri 10:00 a.m. - 10:45 a.m.
Live Panel Discussion (Panel Discussion)   
Thomas Dietterich, Chelsea Finn, Kamalika Chaudhuri, Yarin Gal, Uri Shalit
Fri 10:45 a.m. - 11:15 a.m.
Lunch Break (Break)
Fri 11:15 a.m. - 11:25 a.m.
Repulsive Deep Ensembles are Bayesian (Contributed Talk)   
Francesco D'Angelo
Fri 11:25 a.m. - 11:35 a.m.
Wide Mean-Field Variational Bayesian Neural Networks Ignore the Data (Contributed Talk)   
Beau Coker
Fri 11:35 a.m. - 11:45 a.m.
Are Bayesian neural networks intrinsically good at out-of-distribution detection? (Contributed Talk)   
Christian Henning
Fri 11:45 a.m. - 12:15 p.m.

Machine learning models deployed in the real world constantly face distribution shifts, yet current models are not robust to these shifts; they can perform well when the train and test distributions are identical, but still have their performance plummet when evaluated on a different test distribution. In this talk, I will discuss methods and benchmarks for improving robustness to distribution shifts. First, we consider the problem of spurious correlations and show how to mitigate it with a combination of distributionally robust optimization (DRO) and controlling model complexity (e.g., through strong L2 regularization, early stopping, or underparameterization). Second, we present WILDS, a curated and diverse collection of 10 datasets with real-world distribution shifts, which aims to address the under-representation of real-world shifts in the datasets widely used in the ML community today. We observe that existing methods fail to mitigate performance drops due to these distribution shifts, underscoring the need for new training methods that produce models which are more robust to the types of distribution shifts that arise in practice.
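As context for the DRO component mentioned above, here is a minimal sketch of the group DRO objective (optimize the worst group's average loss rather than the overall average); the grouping and loss values are toy illustrations, not the talk's implementation:

```python
import numpy as np

def group_dro_loss(per_example_losses, group_ids):
    """Group DRO objective (sketch): the average loss of the
    worst-performing group. Optimizing this discourages models that
    rely on spurious features correlated with the majority groups."""
    losses = np.asarray(per_example_losses, dtype=float)
    groups = np.asarray(group_ids)
    group_means = [losses[groups == g].mean() for g in np.unique(groups)]
    return max(group_means)

# Toy example: group 1 (say, a rare subpopulation where the spurious
# feature misleads the model) has higher loss, so it sets the objective.
losses = [0.1, 0.2, 0.9, 1.1]
groups = [0, 0, 1, 1]
print(group_dro_loss(losses, groups))  # 1.0
```

The standard average loss over these four examples would be 0.575; group DRO instead reports 1.0, the mean loss of the worse group.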

Shiori Sagawa
Fri 12:15 p.m. - 12:30 p.m.
Coffee Break 2 (Break)
Fri 12:30 p.m. - 1:00 p.m.

Aggregate evaluations of deep learning models on popular benchmarks have incentivized the creation of bigger models that are more accurate on iid data. As the research community has realized that these models do not generalize out of distribution, the trend has shifted to evaluations on adversarially constructed, unnatural datasets. However, both of these extremes have limitations when it comes to meeting the goals of evaluation. In this talk, I propose that the goal of evaluation is to inform the user's next action, in the form of 1) further analysis or 2) model patching. Thinking of evaluation as an iterative process dovetails with these goals. Our work on Robustness Gym (RG) proposes an iterative process of evaluation and explains how it enables a user to iterate on their model development process. I will give two concrete examples in NLP demonstrating how RG supports the aforementioned evaluation goals. Towards the end of the talk, I will discuss some caveats associated with evaluating pre-trained language models (PLMs), focusing in particular on the problem of input contamination, with examples from our work on SummVis. Using these examples from RG and SummVis, I hope to draw attention to the limitations of current evaluations and the need for a more thorough process that helps us gain a better understanding of our deep learning models.

Nazneen Rajani
Fri 1:00 p.m. - 1:10 p.m.
Calibrated Out-of-Distribution Detection with Conformal P-values (Contributed Talk)   
Lihua Lei
Fri 1:10 p.m. - 1:20 p.m.
Provably Robust Detection of Out-of-distribution Data (almost) for free (Contributed Talk)   
Alex Meinke
Fri 1:20 p.m. - 1:30 p.m.
Out-of-Distribution Dynamics Detection: RL-Relevant Benchmarks and Results (Contributed Talk)   
Mohamad H Danesh
Fri 1:30 p.m. - 2:00 p.m.

Novelty detection, i.e., identifying whether a given sample is drawn from outside the training distribution, is essential for reliable machine learning. To this end, there have been many attempts at learning a representation well-suited for novelty detection and designing a score based on such a representation. In this talk, I will present a simple, yet effective method named contrasting shifted instances (CSI), inspired by the recent success of contrastive learning of visual representations. Specifically, in addition to contrasting a given sample with other instances, as in conventional contrastive learning methods, our training scheme contrasts the sample with distributionally-shifted augmentations of itself. Based on this, we propose a new detection score that is specific to the proposed training scheme. Our experiments demonstrate the superiority of our method under various novelty detection scenarios, including unlabeled one-class, unlabeled multi-class, and labeled multi-class settings, with various image benchmark datasets. This is joint work with Jihoon Tack, Sangwoo Mo and Jongheon Jeong (all from KAIST).
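To make the flavor of such a representation-based detection score concrete, here is a simplified stand-in (not the paper's exact CSI score): cosine similarity to the nearest training representation, scaled by the feature norm, two quantities CSI observes to be discriminative under a contrastively trained encoder. The toy 2-D features below are purely illustrative:

```python
import numpy as np

def csi_style_score(z, train_feats):
    """Simplified novelty score (higher = more in-distribution):
    max cosine similarity to the training representations,
    scaled by the norm of the sample's feature vector."""
    z = np.asarray(z, dtype=float)
    feats = np.asarray(train_feats, dtype=float)

    def normalize(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    cos = normalize(feats) @ normalize(z)      # cosine to each training feature
    return cos.max() * np.linalg.norm(z)       # nearest similarity x feature norm

# Toy features: a point aligned with the training data scores higher
# than an orthogonal (novel) one.
train = np.array([[1.0, 0.0], [0.9, 0.1]])
in_dist = np.array([1.0, 0.05])
novel = np.array([0.0, 1.0])
assert csi_style_score(in_dist, train) > csi_style_score(novel, train)
```

A sample would be flagged as novel when this score falls below a threshold chosen on held-out in-distribution data.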

Jinwoo Shin
- A simple fix to Mahalanobis distance for improving near-OOD detection (Workshop Poster, Spot A2)
- Wide Mean-Field Variational Bayesian Neural Networks Ignore the Data (Workshop Poster, Spot A1)
- Precise characterization of the prior predictive distribution of deep ReLU networks (Workshop Poster, Spot A3)
- Disentangling the Roles of Curation, Data-Augmentation and the Prior in the Cold Posterior Effect (Workshop Poster, Spot A4)
- Exploring the Limits of Out-of-Distribution Detection (Workshop Poster, Spot A0)
- Repulsive Deep Ensembles are Bayesian (Workshop Poster, Spot A2)
- Calibrated Out-of-Distribution Detection with Conformal P-values (Workshop Poster, Spot A1)
- Are Bayesian neural networks intrinsically good at out-of-distribution detection? (Workshop Poster, Spot A5)
- Provably Robust Detection of Out-of-distribution Data (almost) for free (Workshop Poster, Spot A3)
- Out-of-Distribution Dynamics Detection: RL-Relevant Benchmarks and Results (Workshop Poster, Spot A4)
- Rethinking Assumptions in Deep Anomaly Detection (Workshop Poster, Spot A5)
- Multiple Moment Matching Inference: A Flexible Approximate Inference Algorithm (Workshop Poster, Spot A1)
- PAC Prediction Sets Under Covariate Shift (Workshop Poster, Spot A0)
- Correct-N-Contrast: a Contrastive Approach for Improving Robustness to Spurious Correlations (Workshop Poster, Spot A2)
- Do We Really Need to Learn Representations from In-domain Data for Outlier Detection? (Workshop Poster, Spot A0)
- DATE: Detecting Anomalies in Text via Self-Supervision of Transformers (Workshop Poster, Spot A1)
- Beyond Pinball Loss: Quantile Methods for Calibrated Uncertainty Quantification (Workshop Poster, Spot A3)
- Uncertainty Toolbox: an Open-Source Library for Assessing, Visualizing, and Improving Uncertainty Quantification (Workshop Poster, Spot A4)
- Implicit Ensemble Training for Efficient and Robust Multiagent Reinforcement Learning (Workshop Poster, Spot A2)
- Failures of Uncertainty Estimation on Out-Of-Distribution Samples: Experimental Results from Medical Applications Lead to Theoretical Insights (Workshop Poster, Spot A3)
- On Out-of-distribution Detection with Energy-Based Models (Workshop Poster, Spot A4)
- Deterministic Neural Networks with Inductive Biases Capture Epistemic and Aleatoric Uncertainty (Workshop Poster, Spot A5)
- Transfer and Marginalize: Explaining Away Label Noise with Privileged Information (Workshop Poster, Spot A5)
- Meta-Calibration: Meta-Learning of Model Calibration Using Differentiable Expected Calibration Error (Workshop Poster, Spot A6)
- Inferring Black Hole Properties from Astronomical Multivariate Time Series with Bayesian Attentive Neural Processes (Workshop Poster, Spot B0)
- Towards improving robustness of compressed CNNs (Workshop Poster, Spot A6)
- SAND-mask: An Enhanced Gradient Masking Strategy for Invariant Prediction in Domain Generalization (Workshop Poster, Spot A6)
- Efficient Gaussian Neural Processes for Regression (Workshop Poster, Spot B0)
- Simple, Attack-Agnostic Defense Against Targeted Training Set Attacks Using Cosine Similarity (Workshop Poster, Spot A6)
- Safety & Exploration: A Comparative Study of Uses of Uncertainty in Reinforcement Learning (Workshop Poster, Spot B1)
- Rethinking Function-Space Variational Inference in Bayesian Neural Networks (Workshop Poster, Spot B1)
- Understanding the Under-Coverage Bias in Uncertainty Estimation (Workshop Poster, Spot B2)
- BETH Dataset: Real Cybersecurity Data for Anomaly Detection Research (Workshop Poster, Spot B0)
- Mean Embeddings with Test-Time Data Augmentation for Ensembling of Representations (Workshop Poster, Spot B1)
- Deep Ensemble Uncertainty Fails as Network Width Increases: Why, and How to Fix It (Workshop Poster, Spot B4)
- Exact and Efficient Adversarial Robustness with Decomposable Neural Networks (Workshop Poster, Spot B2)
- Consistency Regularization for Training Confidence-Calibrated Classifiers (Workshop Poster, Spot B0)
- Diverse and Amortised Counterfactual Explanations for Uncertainty Estimates (Workshop Poster, Spot B3)
- Quantization of Bayesian neural networks and its effect on quality of uncertainty (Workshop Poster, Spot B3)
- Class-Distribution-Aware Calibration for Long-Tailed Visual Recognition (Workshop Poster, Spot A0)
- Bayesian Neural Networks with Soft Evidence (Workshop Poster, Spot B4)
- Anomaly Detection for Event Data with Temporal Point Processes (Workshop Poster, Spot B3)
- Batch Inverse-Variance Weighting: Deep Heteroscedastic Regression (Workshop Poster, Spot B5)
- An Empirical Study of Invariant Risk Minimization on Deep Models (Workshop Poster, Spot B4)
- A Bayesian Approach to Invariant Deep Neural Networks (Workshop Poster, Spot B2)
- Practical posterior Laplace approximation with optimization-driven second moment estimation (Workshop Poster, Spot B6)
- Variational Generative Flows for Reconstruction Uncertainty Estimation (Workshop Poster, Spot C0)
- Improving the Accuracy-Robustness Trade-Off for Dual-Domain Adversarial Training (Workshop Poster, Spot B1)
- Consistency Regularization Can Improve Robustness to Label Noise (Workshop Poster, Spot B5)
- Neural Variational Gradient Descent (Workshop Poster, Spot B5)
- Evaluating the Use of Reconstruction Error for Novelty Localization (Workshop Poster, Spot B6)
- Accuracy on the Line: on the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization (Workshop Poster, Spot B2)
- The Hidden Uncertainty in a Neural Network's Activations (Workshop Poster, Spot B6)
- On the Calibration of Deterministic Epistemic Uncertainty (Workshop Poster, Spot B3)
- Objective Robustness in Deep Reinforcement Learning (Workshop Poster, Spot C0)
- Epistemic Uncertainty in Learning Chaotic Dynamical Systems (Workshop Poster, Spot C1)
- Towards Stochastic Neural Networks via Inductive Wasserstein Embeddings (Workshop Poster, Spot C0)
- Distribution-free uncertainty quantification for classification under label shift (Workshop Poster, Spot C1)
- How does a Neural Network's Architecture Impact its Robustness to Noisy Labels? (Workshop Poster, Spot C2)
- Top-label calibration (Workshop Poster, Spot C2)
- Learning to Align the Support of Distributions (Workshop Poster, Spot C2)
- Beyond First-Order Uncertainty Estimation with Evidential Models for Open-World Recognition (Workshop Poster, Spot C3)
- Revisiting Out-of-Distribution Detection: A Simple Baseline is Surprisingly Effective (Workshop Poster, Spot C1)
- Contrastive Predictive Coding for Anomaly Detection and Segmentation (Workshop Poster, Spot C3)
- Multi-headed Neural Ensemble Search (Workshop Poster, Spot C4)
- A variational approximate posterior for the deep Wishart process (Workshop Poster, Spot C4)
- On Stein Variational Neural Network Ensembles (Workshop Poster, Spot B4)
- Uncertainty-Aware Boosted Ensembling in Multi-Modal Settings (Workshop Poster, Spot C3)
- RouBL: A computationally cheap way to go beyond mean-field variational inference (Workshop Poster, Spot C4)
- No True State-of-the-Art? OOD Detection Methods are Inconsistent across Datasets (Workshop Poster, Spot B6)
- Out-of-Distribution Generalization with Deep Equilibrium Models (Workshop Poster, Spot C0)
- Mixture Proportion Estimation and PU Learning: A Modern Approach (Workshop Poster, Spot C1)
- On The Dark Side Of Calibration For Modern Neural Networks (Workshop Poster, Spot C5)
- Domain Adaptation with Factorizable Joint Shift (Workshop Poster, Spot C5)
- Scaling Laws for the Out-of-Distribution Generalization of Image Classifiers (Workshop Poster, Spot C5)
- Learning Invariant Weights in Neural Networks (Workshop Poster, Spot C6)
- Relational Deep Reinforcement Learning and Latent Goals for Following Instructions in Temporal Logic (Workshop Poster, Spot C6)
- On the Effectiveness of Mode Exploration in Bayesian Model Averaging for Neural Networks (Workshop Poster, Spot D0)
- Training-Free Uncertainty Estimation for Dense Regression: Sensitivity as a Surrogate (Workshop Poster, Spot C6)
- Detecting OODs as datapoints with High Uncertainty (Workshop Poster, Spot C2)
- Multi-task Transformation Learning for Robust Out-of-Distribution Detection (Workshop Poster, Spot C3)
- Directly Training Joint Energy-Based Models for Conditional Synthesis and Calibrated Prediction of Multi-Attribute Data (Workshop Poster, Spot D0)
- Deep Learning with Quantified Uncertainty for Free Electron Laser Scientific Facilities (Workshop Poster, Spot D1)
- On the reversed bias-variance tradeoff in deep ensembles (Workshop Poster, Spot D0)
- Robust Generalization of Quadratic Neural Networks via Function Identification (Workshop Poster, Spot D1)
- Exploring Corruption Robustness: Inductive Biases in Vision Transformers and MLP-Mixers (Workshop Poster, Spot C4)
- Deep Random Projection Outlyingness for Unsupervised Anomaly Detection (Workshop Poster, Spot D2)
- Deep Deterministic Uncertainty for Semantic Segmentation (Workshop Poster, Spot D1)
- Identifying Invariant and Sparse Predictors in High-dimensional Data (Workshop Poster, Spot D2)
- On Misclassification-Aware Smoothing for Robustness and Uncertainty Calibration (Workshop Poster, Spot D3)
- On Pitfalls in OoD Detection: Entropy Considered Harmful (Workshop Poster, Spot D4)
- PnPOOD: Out-Of-Distribution Detection for Text Classification via Plug and Play Data Augmentation (Workshop Poster, Spot C5)
- Augmented Invariant Regularization (Workshop Poster, Spot D3)
- Model-Based Robust Deep Learning: Generalizing to Natural, Out-of-Distribution Data (Workshop Poster, Spot D4)
- Improved Adversarial Robustness via Uncertainty Targeted Attacks (Workshop Poster, Spot C6)
- Notes on the Behavior of MC Dropout (Workshop Poster, Spot D5)
- Distribution-free Risk-controlling Prediction Sets (Workshop Poster, Spot D6)
- Stochastic Bouncy Particle Sampler for Bayesian Neural Networks (Workshop Poster, Spot D2)
- Novelty detection using ensembles with regularized disagreement (Workshop Poster, Spot D5)
- A Tale Of Two Long Tails (Workshop Poster, Spot D3)
- Defending against Adversarial Patches with Robust Self-Attention (Workshop Poster, Spot D0)
- Intrinsic uncertainties and where to find them (Workshop Poster, Spot D4)
- Dataset to Dataspace: A Topological-Framework to Improve Analysis of Machine Learning Model Performance (Workshop Poster, Spot D1)
- Analyzing And Improving Neural Networks By Generating Semantic Counterexamples Through Differentiable Rendering (Workshop Poster, Spot D2)
- Thinkback: Task-Specific Out-of-Distribution Detection (Workshop Poster, Spot D6)
- Relating Adversarially Robust Generalization to Flat Minima (Workshop Poster, Spot D3)
- Deep Quantile Aggregation (Workshop Poster, Spot D4)
- What Are Effective Labels for Augmented Data? Improving Calibration and Robustness with AutoLabel (Workshop Poster, Spot B5)

Author Information

Balaji Lakshminarayanan (Google Brain)
Dan Hendrycks (UC Berkeley)
Sharon Li (University of Wisconsin-Madison)
Jasper Snoek (Google Brain)
Silvia Chiappa (DeepMind)
Sebastian Nowozin (Microsoft Research)
Tom Dietterich (Oregon State University)
