Workshop
Uncertainty and Robustness in Deep Learning Workshop (UDL)
Sharon Yixuan Li · Balaji Lakshminarayanan · Dan Hendrycks · Thomas Dietterich · Jasper Snoek

Fri Jul 17 07:30 AM -- 04:00 PM (PDT)
Event URL: https://sites.google.com/view/udlworkshop2020/home

There has been growing interest in rectifying deep neural network instabilities. Challenges arise when models receive samples drawn from outside the training distribution. For example, a neural network tasked with classifying handwritten digits may assign high-confidence predictions to cat images. Such anomalies are frequently encountered when deploying ML models in the real world. Well-calibrated predictive uncertainty estimates are indispensable for many machine learning applications, such as self-driving vehicles and medical diagnosis systems. Generalization to unseen and worst-case inputs is also essential for robustness to distributional shift. In order for ML models to predict reliably in open environments, we must deepen our technical understanding in the emerging areas of: (1) learning algorithms that can detect changes in data distribution (e.g., out-of-distribution examples); (2) mechanisms to estimate and calibrate the confidence produced by neural networks in typical and unforeseen scenarios; (3) methods to improve out-of-distribution generalization, including generalization to temporal, geographical, hardware, adversarial, and image-quality changes; (4) benchmark datasets and protocols for evaluating model performance under distribution shift; and (5) key applications of robust and uncertainty-aware deep learning (e.g., computer vision, robotics, self-driving vehicles, medical imaging) as well as broader machine learning tasks.

This workshop will bring together researchers and practitioners from the machine learning community and highlight recent work that addresses these challenges. Our agenda will feature contributed papers alongside invited talks. Through the workshop, we hope to identify fundamentally important directions for robust and reliable deep learning and to foster future collaborations.

Author Information

Sharon Yixuan Li (Stanford University)

Sharon Y. Li is currently a postdoctoral researcher in the Computer Science Department at Stanford, working with Chris Ré. She will be joining the Computer Sciences Department at the University of Wisconsin-Madison as an assistant professor, starting in Fall 2020. Previously, she completed her PhD at Cornell University in 2017, where she was advised by John E. Hopcroft; her thesis committee members were Kilian Q. Weinberger and Thorsten Joachims. She has spent time at Google AI twice as an intern and at Facebook AI as a research scientist. She was named to Forbes 30 Under 30 in Science in 2020. Her principal research interests are in the algorithmic foundations of deep learning and its applications. Her time in both academia and industry has shaped her view of and approach to research. She is particularly excited about developing open-world machine learning methods that can reduce human supervision during training and enhance reliability during deployment.

Balaji Lakshminarayanan (Google Brain)
Dan Hendrycks (UC Berkeley)
Thomas Dietterich (Oregon State University)
Jasper Snoek (Google Brain)

More from the Same Authors