Improving Robustness to Distribution Shifts: Methods and Benchmarks
Shiori Sagawa

Fri Jul 23 11:45 AM -- 12:15 PM (PDT)
Event URL: https://cs.stanford.edu/~ssagawa/assets/slides/UDL_2021_ShioriSagawa.pdf

Machine learning models deployed in the real world constantly face distribution shifts, yet current models are not robust to them: they can perform well when the train and test distributions are identical, but their performance can plummet when they are evaluated on a different test distribution. In this talk, I will discuss methods and benchmarks for improving robustness to distribution shifts. First, we consider the problem of spurious correlations and show how to mitigate it with a combination of distributionally robust optimization (DRO) and controlling model complexity---e.g., through strong L2 regularization, early stopping, or underparameterization. Second, we present WILDS, a curated and diverse collection of 10 datasets with real-world distribution shifts, which aims to address the under-representation of real-world shifts in the datasets widely used in the ML community today. We observe that existing methods fail to mitigate performance drops due to these distribution shifts, underscoring the need for new training methods that produce models which are more robust to the types of distribution shifts that arise in practice.
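As a rough illustration of the DRO idea mentioned above, the sketch below implements one common group DRO step: track a distribution over groups, upweight the groups with the highest average loss via an exponentiated-gradient update, and take the weighted loss as the training objective. The function name, the step size `eta`, and the use of NumPy are illustrative choices, not details from the talk; in practice this would be combined with the strong regularization the abstract describes.

```python
import numpy as np

def group_dro_step(losses, groups, group_weights, eta=0.01):
    """One group DRO step (illustrative sketch).

    losses: per-example losses, shape (n,)
    groups: integer group label per example, shape (n,)
    group_weights: current distribution over the G groups, shape (G,)
    eta: step size for the exponentiated-gradient update on group weights
    """
    n_groups = len(group_weights)
    # Average loss within each group (0 if a group is absent from the batch).
    group_losses = np.array([
        losses[groups == g].mean() if np.any(groups == g) else 0.0
        for g in range(n_groups)
    ])
    # Exponentiated-gradient ascent: worse-performing groups get more weight.
    new_weights = group_weights * np.exp(eta * group_losses)
    new_weights /= new_weights.sum()
    # Robust loss: weighted average of group losses under the new weights.
    robust_loss = float(new_weights @ group_losses)
    return robust_loss, new_weights
```

Because the weights concentrate on the worst group, the robust loss is at least the plain average loss; the model parameters would then be updated to minimize this weighted objective.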

Author Information

Shiori Sagawa (Stanford University)