
Breakout Session [Hall 326B]
in
Affinity Workshop: 4th Women in Machine Learning (WiML) Un-Workshop

Robustness in Machine Learning

Yao Qin · Qi Lei


Abstract:

Many robustness issues arise in a variety of forms when ML systems are deployed in the real world. For example, neural networks are sensitive to distributional shift, where a model is tested on a data distribution different from the one it was trained on. Such shifts are frequently encountered in practical deployments and can lead to a substantial degradation in performance. In addition, neural networks are vulnerable to adversarial examples: small perturbations to the input can successfully fool classifiers into making incorrect predictions. In this session, we will develop a deeper understanding of different robustness issues and discuss how to effectively enhance models' robustness.
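To make the adversarial-example phenomenon concrete, the sketch below crafts a small input perturbation with the fast gradient sign method (FGSM, Goodfellow et al., 2015), one standard attack of the kind the abstract describes. The tiny linear model, random input, and label here are illustrative placeholders, not the session's actual material.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Illustrative stand-ins: a tiny linear classifier and a random "image".
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)   # input with pixel values in [0, 1]
y = torch.tensor([3])          # assumed true label for this input
epsilon = 0.1                  # L-infinity perturbation budget

# Compute the gradient of the loss with respect to the input.
x.requires_grad_(True)
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

# FGSM step: move each pixel by epsilon in the direction that increases
# the loss, then clip back to the valid pixel range.
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

# The perturbation is imperceptibly small by construction, yet such
# steps routinely flip a trained classifier's prediction.
max_change = (x_adv - x).abs().max().item()
```

Defenses discussed in robustness work, such as adversarial training, reuse exactly this attack loop to generate perturbed inputs during training.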