Workshop
Uncertainty and Robustness in Deep Learning
Sharon Yixuan Li · Dan Hendrycks · Thomas Dietterich · Balaji Lakshminarayanan · Justin Gilmer

Fri Jun 14 08:30 AM -- 06:00 PM (PDT) @ Hall B
Event URL: https://sites.google.com/view/udlworkshop2019/home

There has been growing interest in rectifying deep neural network vulnerabilities. Challenges arise when models receive samples drawn from outside the training distribution. For example, a neural network tasked with classifying handwritten digits may assign high-confidence predictions to cat images. Such anomalies are frequently encountered when deploying ML models in the real world. Well-calibrated predictive uncertainty estimates are indispensable for many machine learning applications, such as self-driving cars and medical diagnosis systems. Generalization to unseen and worst-case inputs is also essential for robustness to distributional shift. In order for ML models to predict reliably in open environments, we must deepen technical understanding in the following areas: (1) learning algorithms that are robust to changes in the input data distribution (e.g., that detect out-of-distribution examples); (2) mechanisms to estimate and calibrate the confidence produced by neural networks; (3) methods to improve robustness to adversarial and common corruptions; and (4) key applications of uncertainty in artificial intelligence (e.g., computer vision, robotics, self-driving cars, medical imaging) as well as in broader machine learning tasks.

This workshop will bring together researchers and practitioners from the machine learning community and highlight recent work that contributes to addressing these challenges. Our agenda will feature contributed papers alongside invited speakers. Through the workshop we hope to help identify fundamentally important directions for robust and reliable deep learning, and to foster future collaborations.
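As a concrete (and heavily simplified) illustration of area (1), the sketch below scores inputs by their maximum softmax probability, a common out-of-distribution detection baseline. The toy logits and threshold are assumptions for demonstration only, not material from any particular talk.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def msp_score(logits):
    """Maximum softmax probability: higher means the model is more confident."""
    return softmax(logits).max(axis=1)

# Toy demo: in-distribution inputs tend to have one dominant logit, OOD inputs do not.
rng = np.random.default_rng(0)
in_logits = rng.normal(size=(5, 10))
in_logits[np.arange(5), rng.integers(0, 10, 5)] += 8.0   # one confident class per sample
ood_logits = rng.normal(size=(5, 10))                     # no dominant class
threshold = 0.5  # would normally be chosen on validation data
print("in-dist flagged as OOD:", msp_score(in_logits) < threshold)
print("OOD flagged as OOD:    ", msp_score(ood_logits) < threshold)
```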

Fri 8:30 a.m. - 8:40 a.m.
Welcome
Sharon Yixuan Li
Fri 8:40 a.m. - 9:30 a.m.
Spotlight
Tyler Scott, Kiran Koshy, Jonathan Aigrain, Rene Bidart, Priyadarshini Panda, Dian Ang Yap, Yaniv Yacoby, Raphael Gontijo Lopes, Alberto Marchisio, Erik Englesson, Wanqian Yang, Moritz Graule, Yi Sun, Daniel Kang, Mike Dusenberry, Min Du, Hartmut Maennel, Kunal Menda, Vineet Edupuganti, Luke Metz, David Stutz, Vignesh Srinivasan, Timo Sämann, Vineeth N Balasubramanian, Sina Mohseni, Rob Cornish, Judith Butepage, Zhangyang Wang, Bai Li, Bo Han, Honglin Li, Maksym Andriushchenko, Lukas Ruff, Meet P. Vadera, Yaniv Ovadia, Sunil Thulasidasan, Disi Ji, Gang Niu, Saeed Mahloujifar, Aviral Kumar, SANGHYUK CHUN, Dong Yin, Joyce Xu Xu, Hugo Gomes, Raanan Rohekar
Fri 9:30 a.m. - 10:00 a.m.

We present a new family of exchangeable stochastic processes suitable for deep learning, the Functional Neural Processes (FNPs). Our nonparametric Bayesian method models distributions over functions by learning a graph of dependencies on top of latent representations of the points in the given dataset. In doing so, FNPs define a Bayesian model without explicitly positing a prior distribution over latent global parameters; they instead adopt priors over the relational structure of the given dataset, a task that is much simpler. We show how we can learn such models from data, demonstrate that they are scalable to large datasets through mini-batch optimization, and describe how we can make predictions for new points via their posterior predictive distribution. We experimentally evaluate FNPs on the tasks of toy regression and image classification and show that, when compared to baselines that employ global latent parameters, they offer both competitive predictions as well as more robust uncertainty estimates.

Max Welling
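A heavily simplified toy sketch of the dependency-graph idea (not the authors' FNP implementation): embed points, sample soft Bernoulli dependencies on a reference set from embedding similarity, and classify a new point from an aggregation of the reference representations it depends on. All module names, shapes, and the fixed relaxation temperature below are hypothetical.

```python
import torch

class ToyRelationalPredictor(torch.nn.Module):
    def __init__(self, in_dim, hidden, n_classes):
        super().__init__()
        self.embed = torch.nn.Linear(in_dim, hidden)
        self.classify = torch.nn.Linear(hidden, n_classes)

    def forward(self, x_new, x_ref):
        u_new, u_ref = self.embed(x_new), self.embed(x_ref)
        # Dependency probabilities from (negative squared) embedding distances.
        dep_logits = -torch.cdist(u_new, u_ref).pow(2)
        g = torch.distributions.RelaxedBernoulli(torch.tensor(0.5), logits=dep_logits).rsample()
        # Aggregate the reference representations each new point depends on.
        weights = g / (g.sum(dim=1, keepdim=True) + 1e-6)
        return self.classify(weights @ u_ref)

# Usage: predict 4 new points from a reference set of 50 points.
model = ToyRelationalPredictor(in_dim=20, hidden=32, n_classes=10)
preds = model(torch.randn(4, 20), torch.randn(50, 20))
```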
Fri 10:00 a.m. - 11:00 a.m.
Poster Session 1 (all papers) (Poster)
Fri 11:00 a.m. - 11:30 a.m.

We investigate calibration for deep learning algorithms in classification and regression settings. Although we show that deep networks typically tend to be highly miscalibrated, we demonstrate that this is easy to fix, either to obtain more trustworthy confidence estimates or to detect outliers in the data. Finally, we relate calibration to the recently raised tension between minimizing error disparity across different population groups and maintaining calibrated probability estimates. We show that calibration is compatible with only a single error constraint (i.e., equal false-negative rates across groups), and that any algorithm satisfying this relaxation is no better than randomizing a percentage of predictions for an existing classifier. These unsettling findings, which extend and generalize existing results, are empirically confirmed on several datasets.

Kilian Weinberger
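For readers who want a concrete handle on calibration, the sketch below estimates the expected calibration error on held-out logits and fits a post-hoc temperature, one widely used recalibration recipe. It is a generic illustration under assumed inputs (`logits`, integer `labels`), not necessarily the exact method covered in this talk.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def expected_calibration_error(probs, labels, n_bins=15):
    """Weighted average of |accuracy - confidence| over confidence bins."""
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            total += mask.mean() * abs((pred[mask] == labels[mask]).mean() - conf[mask].mean())
    return total

def fit_temperature(logits, labels):
    """Choose T > 0 minimising the negative log-likelihood of softmax(logits / T)."""
    def nll(T):
        p = softmax(logits / T)
        return -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    return minimize_scalar(nll, bounds=(0.05, 10.0), method="bounded").x
```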
Fri 11:30 a.m. - 11:40 a.m.

Classifiers used in the wild, in particular in safety-critical systems, should know when they don't know; in particular, they should make low-confidence predictions far away from the training data. We show that ReLU-type neural networks fail in this regard, as they almost always produce high-confidence predictions far away from the training data. For bounded domains we propose a new robust optimization technique, similar to adversarial training, which enforces low-confidence predictions far away from the training data. We show that this technique is surprisingly effective in reducing the confidence of predictions far away from the training data while maintaining high-confidence predictions and test error on the original classification task compared to standard training. This is a short version of the corresponding CVPR paper.

Maksym Andriushchenko
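A rough sketch of the kind of combined objective the abstract describes: standard cross-entropy on training data plus a term that pushes predictions on points far from the data towards the uniform distribution. Using uniform input noise as the "far away" distribution and the weighting below are assumptions, and the paper's adversarial inner maximization is omitted.

```python
import torch
import torch.nn.functional as F

def low_confidence_loss(model, x_in, y_in, noise_weight=1.0):
    # Standard classification loss on in-distribution data.
    loss_in = F.cross_entropy(model(x_in), y_in)

    # Low-confidence term on points "far away" from the training data,
    # approximated here by uniform noise in input space (an assumption).
    x_noise = torch.rand_like(x_in)
    log_p_noise = F.log_softmax(model(x_noise), dim=1)
    # Penalising the maximum predicted log-probability pushes these
    # predictions towards the uniform distribution.
    loss_noise = log_p_noise.max(dim=1).values.mean()

    return loss_in + noise_weight * loss_noise
```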
Fri 11:40 a.m. - 11:50 a.m.

In this work, we explore principled methods for extrapolation detection. We define extrapolation as occurring when a model’s conclusion at a test point is underdetermined by the training data. Our metrics for detecting extrapolation are based on influence functions, inspired by the intuition that a point requires extrapolation if its inclusion in the training set would significantly change the model’s learned parameters. We provide interpretations of our methods in terms of the eigendecomposition of the Hessian. We present experimental evidence that our method is capable of identifying extrapolation to out-of-distribution points.

David Madras
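A minimal sketch of an influence-function-style extrapolation score, assuming a model small enough that the full Hessian of the training loss is tractable. `train_loss_fn` and `test_loss_fn` are hypothetical callables mapping a flat parameter vector to a scalar loss; the score approximates how far the learned parameters would move if the test point were added to the training set.

```python
import torch

def extrapolation_score(train_loss_fn, test_loss_fn, theta, damping=1e-3):
    """theta: flat parameter vector at the trained solution."""
    # Hessian of the training loss and gradient of the test-point loss at theta.
    H = torch.autograd.functional.hessian(train_loss_fn, theta)
    g = torch.autograd.functional.jacobian(test_loss_fn, theta)
    # Approximate parameter change (up to sign): delta ≈ -H^{-1} g.
    H_damped = H + damping * torch.eye(theta.numel())
    delta = torch.linalg.solve(H_damped, g)
    return delta.norm().item()
```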
Fri 11:50 a.m. - 12:00 p.m.

Neural networks can be highly sensitive to noise and perturbations. In this paper we suggest that high dimensional sparse representations can lead to increased robustness to noise and interference. A key intuition we develop is that the ratio of the match volume around a sparse vector divided by the total representational space decreases exponentially with dimensionality, leading to highly robust matching with low interference from other patterns. We analyze efficient sparse networks containing both sparse weights and sparse activations. Simulations on MNIST, the Google Speech Command Dataset, and CIFAR-10 show that such networks demonstrate improved robustness to random noise compared to dense networks, while maintaining competitive accuracy. We propose that sparsity should be a core design constraint for creating highly robust networks.

Subutai Ahmad
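One simple way to obtain sparse activations of the kind discussed above is a k-winners-take-all layer that keeps only the k largest activations per example. The sketch below is a generic illustration, not the paper's exact architecture or weight-sparsity scheme; layer sizes and k are arbitrary.

```python
import torch

class KWinners(torch.nn.Module):
    """Keep the k largest activations per example; zero out the rest."""
    def __init__(self, k: int):
        super().__init__()
        self.k = k

    def forward(self, x):  # x: (batch, features)
        topk = torch.topk(x, self.k, dim=1)
        mask = torch.zeros_like(x).scatter_(1, topk.indices, 1.0)
        return x * mask

# Usage: a dense layer followed by the sparse activation.
layer = torch.nn.Sequential(torch.nn.Linear(256, 512), KWinners(k=25))
out = layer(torch.randn(8, 256))
print((out != 0).float().mean())  # roughly 25/512 non-zero activations per example
```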
Fri 12:00 p.m. - 12:30 p.m.
Keynote by Suchi Saria: Safety Challenges with Black-Box Predictors and Novel Learning Approaches for Failure Proofing (Keynote)
Suchi Saria
Fri 2:00 p.m. - 2:10 p.m.

Bayesian inference was once a gold standard for learning with neural networks, providing accurate full predictive distributions and well-calibrated uncertainty. However, scaling Bayesian inference techniques to deep neural networks is challenging due to the high dimensionality of the parameter space. In this paper, we construct low-dimensional subspaces of parameter space that contain diverse sets of models, such as the first principal components of the stochastic gradient descent (SGD) trajectory. In these subspaces, we are able to apply elliptical slice sampling and variational inference, which struggle in the full parameter space. We show that Bayesian model averaging over the induced posterior in these subspaces produces highly accurate predictions and well-calibrated predictive uncertainty for both regression and image classification.

Polina Kirichenko, Pavel Izmailov, Andrew Wilson
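A minimal sketch of the subspace construction, assuming `sgd_iterates` is a list of flattened parameter snapshots collected along the SGD trajectory and `predict(theta, X)` returns class probabilities. The isotropic Gaussian over subspace coefficients below is a simple stand-in for the slice-sampling or variational posteriors used in the paper.

```python
import numpy as np

def build_subspace(sgd_iterates, rank=5):
    W = np.stack(sgd_iterates)                  # (num_snapshots, num_params)
    mean = W.mean(axis=0)
    deviations = W - mean
    # First principal directions of the trajectory via SVD.
    _, _, Vt = np.linalg.svd(deviations, full_matrices=False)
    return mean, Vt[:rank]                      # mean and (rank, num_params) basis

def subspace_model_average(predict, X, mean, P, n_samples=20, scale=1.0, seed=0):
    """Average predictions over parameters sampled inside the subspace."""
    rng = np.random.default_rng(seed)
    probs = [predict(mean + scale * rng.normal(size=P.shape[0]) @ P, X)
             for _ in range(n_samples)]
    return np.mean(probs, axis=0)
```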
Fri 2:10 p.m. - 2:20 p.m.

Bayesian Neural Networks (BNNs) place priors over the parameters in a neural network. Inference in BNNs, however, is difficult; all inference methods for BNNs are approximate. In this work, we empirically compare the quality of predictive uncertainty estimates for 10 common inference methods on both regression and classification tasks. Our experiments demonstrate that commonly used metrics (e.g. test log-likelihood) can be misleading. Our experiments also indicate that inference innovations designed to capture structure in the posterior do not necessarily produce high quality posterior approximations.

Jiayu Yao
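For reference, the test log-likelihood metric mentioned above can be estimated from Monte Carlo samples of the predictive distribution, as in the sketch below; `prob_samples` with shape (num_posterior_samples, num_test, num_classes) is an assumed input. The abstract's point is precisely that a high value of this number does not by itself guarantee a good posterior approximation.

```python
import numpy as np

def test_log_likelihood(prob_samples, labels):
    # Posterior predictive p(y|x) ≈ average of per-sample class probabilities.
    predictive = prob_samples.mean(axis=0)
    return np.log(predictive[np.arange(len(labels)), labels] + 1e-12).mean()
```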
Fri 2:20 p.m. - 2:30 p.m.

We describe a limitation in the expressiveness of the predictive uncertainty estimate given by mean-field variational inference (MFVI), a popular approximate inference method for Bayesian neural networks. In particular, MFVI fails to give calibrated uncertainty estimates in between separated regions of observations. This can lead to catastrophically overconfident predictions when testing on out-of-distribution data. Avoiding such over-confidence is critical for active learning, Bayesian optimisation and out-of-distribution robustness. We instead find that a classical technique, the linearised Laplace approximation, can handle 'in-between' uncertainty much better for small network architectures.

Andrew Foong
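A minimal sketch of the linearised Laplace predictive variance for a tiny regression network, assuming `nll_fn(theta)` is the training negative log-likelihood as a function of a flat parameter vector and `f(theta, x)` returns the scalar network output. The full Hessian is only tractable here because the network is assumed to be small.

```python
import torch

def laplace_predictive_variance(nll_fn, f, theta_map, x, prior_precision=1.0):
    """Linearised Laplace: f(x, theta) ≈ f(x, theta_map) + J (theta - theta_map),
    so Var[f(x)] ≈ J Sigma J^T with Sigma the Laplace posterior covariance."""
    n = theta_map.numel()
    # Posterior precision ≈ Hessian of the NLL at the MAP plus the prior precision.
    H = torch.autograd.functional.hessian(nll_fn, theta_map)
    Sigma = torch.linalg.inv(H + prior_precision * torch.eye(n))
    # Jacobian of the network output with respect to the parameters.
    J = torch.autograd.functional.jacobian(lambda t: f(t, x), theta_map)
    return (J @ Sigma @ J).item()
```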
Fri 2:30 p.m. - 3:00 p.m.
Keynote by Dawn Song: Adversarial Machine Learning: Challenges, Lessons, and Future Directions (Keynote)
Dawn Song
Fri 3:30 p.m. - 4:00 p.m.

The first part of the talk will explore issues deep networks have with "unknown" inputs, and the general problem of open-set recognition in deep networks. We review the core of open-set recognition theory and its application in our first attempt at open-set deep networks, "OpenMax". We discuss its successes and limitations, and why classic "open-set" approaches don't really solve the problem of deep unknowns. We then present our recent work from NIPS 2018 on a new model we call the ObjectoSphere. Using the ObjectoSphere loss begins to address the learning of deep features that can handle unknown inputs. We present examples of its use, first on simple datasets (MNIST/CIFAR) and then on unpublished work applying it to the real-world problem of open-set face recognition. We discuss the relationship between open-set recognition theory and adversarial image generation, showing how our deep-feature adversarial approach, called LOTS, can attack the first OpenMax solution, as well as successfully attack even open-set face recognition systems. We end with a discussion of how open-set theory can be applied to improve network robustness.

Terry Boult
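A rough sketch of an ObjectoSphere-style objective for training with "unknown" background samples, based on the published description rather than the speaker's code; the margin `xi`, the weight `lam`, and the feature/logit split are assumptions, and it presumes each batch contains both known and unknown samples.

```python
import torch
import torch.nn.functional as F

def objectosphere_loss(features, logits, labels, known_mask, xi=10.0, lam=0.01):
    """known_mask: True for samples from known classes, False for 'unknown' samples."""
    feat_norm = features.norm(dim=1)
    log_p = F.log_softmax(logits, dim=1)

    # Known classes: usual cross-entropy, plus a push to keep feature magnitude above xi.
    loss_known = F.nll_loss(log_p[known_mask], labels[known_mask]) \
        + lam * F.relu(xi - feat_norm[known_mask]).pow(2).mean()

    # Unknown samples: uniform-target cross-entropy (maximum entropy) and small features.
    loss_unknown = -log_p[~known_mask].mean() + lam * feat_norm[~known_mask].pow(2).mean()

    return loss_known + loss_unknown
```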
Fri 4:00 p.m. - 5:00 p.m.
Panel Discussion (moderator: Tom Dietterich) (Panel)
Max Welling, Kilian Weinberger, Terry Boult, Dawn Song, Thomas Dietterich
Fri 5:00 p.m. - 6:00 p.m.
Poster Session 2 (all papers) (Poster)

Author Information

Sharon Yixuan Li (Facebook AI)

Sharon Y. Li is currently a postdoctoral researcher in the Computer Science Department at Stanford, working with Chris Ré. She will be joining the Computer Sciences Department at the University of Wisconsin-Madison as an assistant professor, starting in Fall 2020. Previously, she completed her PhD at Cornell University in 2017, where she was advised by John E. Hopcroft; her thesis committee members were Kilian Q. Weinberger and Thorsten Joachims. She has spent time at Google AI twice as an intern, and at Facebook AI as a Research Scientist. She was named Forbes 30 Under 30 in Science in 2020. Her principal research interests are in the algorithmic foundations of deep learning and its applications. Her time in both academia and industry has shaped her view of and approach to research. She is particularly excited about developing open-world machine learning methods that can reduce human supervision during training and enhance reliability during deployment.

Dan Hendrycks (UC Berkeley)
Tom Dietterich (Oregon State University)
Balaji Lakshminarayanan (Google DeepMind)
Justin Gilmer (Google Brain)
