

Poster in Workshop: Principles of Distribution Shift (PODS)

A Meta-Analysis of Distributionally Robust Models

Benjamin Feuer · Ameya Joshi · Chinmay Hegde


Abstract:

State-of-the-art image classifiers trained on massive datasets (such as ImageNet) have been shown to be vulnerable to a range of both intentional and incidental distribution shifts. On the other hand, several recent classifiers with favorable out-of-distribution (OOD) robustness properties have emerged, achieving high accuracy on their target tasks while maintaining their in-distribution accuracy on challenging benchmarks. We present a meta-analysis of a wide range of publicly released models, most of them published within the last twelve months. Through this meta-analysis, we empirically identify four main commonalities shared by the best-performing OOD-robust models, all of which illuminate the considerable promise of vision-language pre-training.
