Models Out of Line: A Fourier Lens on Distribution Shift Robustness
Sara Fridovich-Keil · Brian Bartoldson · James Diffenderfer · Bhavya Kailkhura · Peer-Timo Bremer

Improving the accuracy of deep neural networks (DNNs) on out-of-distribution (OOD) data is critical to the acceptance of deep learning (DL) in real-world applications. It has been observed that accuracies on in-distribution (ID) versus OOD data follow a linear trend, and models that outperform this baseline are exceptionally rare (and referred to as "effectively robust"). Recently, some promising approaches have been developed to improve OOD robustness, in particular ensembling large pretrained models like CLIP. However, there is still no clear understanding of which model properties are required to produce effective robustness. We approach this issue by conducting an empirical study of robust models on a broad range of natural and synthetic distribution shifts of ImageNet. In particular, we view the "effective robustness puzzle" through a Fourier lens and ask how the spectral properties of models influence the corresponding effective robustness. We find this Fourier lens offers some insight into why certain robust models, particularly those from the CLIP family, achieve OOD robustness.