Spotlight
Data Determines Distributional Robustness in Contrastive Language Image Pre-training
Alex Fang · Vaishaal Shankar · Achal Dave · Yuhao Wan · Gabriel Ilharco · Mitchell Wortsman · Ludwig Schmidt

Wed Jul 20 11:10 AM -- 11:15 AM (PDT)

Contrastively trained image-text models such as CLIP, ALIGN, and BASIC have demonstrated unprecedented robustness to multiple challenging natural distribution shifts. Since these image-text models differ from previous training approaches in several ways, an important question is what causes the large robustness gains. We answer this question via a systematic experimental investigation. Concretely, we study five different possible causes for the robustness gains: (i) the training set size, (ii) the training distribution, (iii) language supervision at training time, (iv) language supervision at test time, and (v) the contrastive loss function. Our experiments show that the more diverse training distribution is the main cause for the robustness gains, with the other factors contributing little to no robustness. Beyond our experimental results, we also introduce ImageNet-Captions, a version of ImageNet with original text annotations from Flickr, to enable further controlled experiments of language-image training.
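
Although the paper attributes the robustness gains mainly to the training distribution rather than to the loss itself, the contrastive objective under study is the standard CLIP-style symmetric cross-entropy over image-text similarities. The snippet below is a minimal PyTorch sketch of that objective, not code from the paper; the function name, temperature value, and feature shapes are illustrative assumptions.

import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_features, text_features, temperature=0.07):
    # Normalize embeddings so dot products are cosine similarities.
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)

    # Pairwise similarity logits for a batch of N image-text pairs.
    logits = image_features @ text_features.t() / temperature

    # Matching pairs lie on the diagonal of the N x N logit matrix.
    targets = torch.arange(logits.shape[0], device=logits.device)

    # Symmetric cross-entropy: image-to-text and text-to-image directions.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2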

Author Information

Alex Fang (University of Washington)
Vaishaal Shankar (Amazon)
Achal Dave (Carnegie Mellon University)
Yuhao Wan (University of Washington, Seattle)
Gabriel Ilharco (University of Washington)
Mitchell Wortsman (University of Washington)
Ludwig Schmidt (Toyota Research Institute)
