Workshop
Uncertainty and Robustness in Deep Learning
Yixuan Li · Dan Hendrycks · Thomas Dietterich · Balaji Lakshminarayanan · Justin Gilmer

Fri Jun 14th 08:30 AM -- 06:00 PM @ Hall B
Event URL: https://sites.google.com/view/udlworkshop2019/home

There has been growing interest in rectifying deep neural network vulnerabilities. Challenges arise when models receive samples drawn from outside the training distribution. For example, a neural network tasked with classifying handwritten digits may assign high-confidence predictions to cat images. Such anomalies are frequently encountered when deploying ML models in the real world. Well-calibrated predictive uncertainty estimates are indispensable for many machine learning applications, such as self-driving cars and medical diagnosis systems. Generalization to unseen and worst-case inputs is also essential for robustness to distributional shift. For ML models to predict reliably in open environments, we must deepen our technical understanding in the following areas: (1) learning algorithms that are robust to changes in the input data distribution (e.g., that detect out-of-distribution examples); (2) mechanisms to estimate and calibrate the confidence produced by neural networks; (3) methods to improve robustness to adversarial and common corruptions; and (4) key applications of uncertainty in artificial intelligence (e.g., computer vision, robotics, self-driving cars, medical imaging) as well as in broader machine learning tasks.
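As a minimal, self-contained sketch of topics (1) and (2) above, the NumPy snippet below illustrates two ideas that recur throughout the program: scoring inputs by their maximum softmax probability (a common out-of-distribution detection baseline) and measuring miscalibration with expected calibration error. The logits here are synthetic stand-ins for a real model's outputs, and the function names are our own; this is illustrative, not any speaker's implementation.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the class axis."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def msp_score(probs):
    """Maximum softmax probability (MSP), a standard OOD-detection baseline:
    in-distribution inputs tend to receive higher maximum probabilities."""
    return probs.max(axis=1)

def expected_calibration_error(probs, labels, n_bins=10):
    """Expected Calibration Error (ECE): the average gap between confidence
    and accuracy across confidence bins, weighted by bin occupancy."""
    conf = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(correct[in_bin].mean() - conf[in_bin].mean())
    return ece

# Synthetic demo: in-distribution logits have a boosted true-class entry,
# so the model is confident; OOD logits are diffuse, so confidence drops.
rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=500)
logits_in = rng.normal(size=(500, 10))
logits_in[np.arange(500), labels] += 4.0   # confident, mostly correct
logits_ood = rng.normal(size=(500, 10))    # no class stands out
p_in, p_ood = softmax(logits_in), softmax(logits_ood)

print("mean MSP, in-distribution: %.3f" % msp_score(p_in).mean())
print("mean MSP, OOD:             %.3f" % msp_score(p_ood).mean())
print("ECE, in-distribution:      %.3f" % expected_calibration_error(p_in, labels))
```

In this toy setting the in-distribution inputs receive noticeably higher MSP scores than the OOD inputs, so thresholding the score separates the two groups; in practice one would compute both quantities from a trained network's logits on held-out data.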

This workshop will bring together researchers and practitioners from the machine learning community and highlight recent work that contributes to addressing these challenges. Our agenda will feature contributed talks alongside invited keynote speakers. Through the workshop, we hope to help identify fundamentally important directions for robust and reliable deep learning, and to foster future collaborations.

08:30 AM Welcome Video »  Sharon Li
08:40 AM Spotlight Video »
Tyler Scott, Kiran Koshy, Jonathan Aigrain, Rene Bidart, Priyadarshini Panda, Dian Ang Yap, Yaniv Yacoby, Raphael Gontijo Lopes, Alberto Marchisio, Erik Englesson, Wanqian Yang, Moritz Graule, Yi Sun, Daniel Kang, Mike Dusenberry, Min Du, Hartmut Maennel, Kunal Menda, Vineet Edupuganti, Luke Metz, David Stutz, Vignesh Srinivasan, Timo Sämann, Vineeth N Balasubramanian, Sina Mohseni, Rob Cornish, Judith Bütepage, Zhangyang Wang, Bai Li, Bo Han, Honglin Li, Maksym Andriushchenko, Lukas Ruff, Meet P. Vadera, Yaniv Ovadia, Sunil Thulasidasan, Disi Ji, Gang Niu, Saeed Mahloujifar, Aviral Kumar, Sanghyuk Chun, Dong Yin, Joyce Xu, Hugo Gomes, Raanan Rohekar
09:30 AM Keynote by Max Welling: A Nonparametric Bayesian Approach to Deep Learning (without GPs) (Keynote) Video »  Max Welling
10:00 AM Poster Session 1 (all papers) (Poster)
11:00 AM Keynote by Kilian Weinberger: On Calibration and Fairness (Keynote) Video »  Kilian Weinberger
11:30 AM Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem (Contributed talk) Video »  Maksym Andriushchenko
11:40 AM Detecting Extrapolation with Influence Functions (Contributed talk) Video »  David Madras
11:50 AM How Can We Be So Dense? The Robustness of Highly Sparse Representations (Contributed talk) Video »  Subutai Ahmad
12:00 PM Keynote by Suchi Saria: Safety Challenges with Black-Box Predictors and Novel Learning Approaches for Failure Proofing (Keynote) Video »  Suchi Saria
02:00 PM Subspace Inference for Bayesian Deep Learning (Contributed talk) Video »  Polina Kirichenko, Pavel Izmailov, Andrew Wilson
02:10 PM Quality of Uncertainty Quantification for Bayesian Neural Network Inference (Contributed talk) Video »  Jiayu Yao
02:20 PM 'In-Between' Uncertainty in Bayesian Neural Networks (Contributed talk) Video »  Andrew Foong
02:30 PM Keynote by Dawn Song: Adversarial Machine Learning: Challenges, Lessons, and Future Directions (Keynote) Video »  Dawn Song
03:30 PM Keynote by Terrance Boult: The Deep Unknown: on Open-set and Adversarial Examples in Deep Learning (Keynote) Video »  Terry Boult
04:00 PM Panel Discussion (moderator: Tom Dietterich) (Panel) Video »  Max Welling, Kilian Weinberger, Terry Boult, Dawn Song, Thomas Dietterich
05:00 PM Poster Session 2 (all papers) (Poster)

Author Information

Sharon Li (Facebook AI)

Sharon Y. Li is currently a postdoctoral researcher in the Computer Science Department at Stanford, working with Chris Ré. She will be joining the Computer Sciences Department at the University of Wisconsin-Madison as an assistant professor, starting in Fall 2020. Previously, she completed her PhD at Cornell University in 2017, where she was advised by John E. Hopcroft; her thesis committee included Kilian Q. Weinberger and Thorsten Joachims. She has spent time at Google AI twice as an intern and at Facebook AI as a research scientist. She was named to Forbes 30 Under 30 in Science in 2020. Her principal research interests are in the algorithmic foundations of deep learning and its applications. Her time in both academia and industry has shaped her view of and approach to research. She is particularly excited about developing open-world machine learning methods that can reduce human supervision during training and enhance reliability during deployment.

Dan Hendrycks (UC Berkeley)
Tom Dietterich (Oregon State University)
Balaji Lakshminarayanan (Google DeepMind)
Justin Gilmer (Google Brain)
