

Poster in Workshop: The Second Workshop on Spurious Correlations, Invariance and Stability

Mitigating Spurious Correlations in Multi-modal Models during Fine-tuning

Yu Yang · Besmira Nushi · Hamid Palangi · Baharan Mirzasoleiman


Abstract:

Mitigating spurious correlations during the pre-training of large-scale multi-modal models can be costly and impractical. This paper proposes a novel approach to address spurious correlations during fine-tuning for a given domain of interest. Focusing on multi-modal models (e.g., CLIP), the proposed method leverages the different modalities in these models to detect and explicitly set apart spurious attributes from the affected class, through a multi-modal contrastive loss function that expresses spurious relationships via language. Our experimental results and in-depth visualizations on CLIP show that such an intervention can effectively i) improve the model's accuracy when the spurious attribute is not present, and ii) direct the model's activation maps towards the actual class rather than the spurious attribute when it is present. In particular, on the Waterbirds dataset, our algorithm achieves a worst-group accuracy 23% higher than ERM on CLIP with a ResNet-50 backbone, and 32% higher on CLIP with a ViT backbone, while maintaining the same average accuracy as ERM.
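
The abstract describes the loss only at a high level. As a rough illustration of the idea, the following is a minimal sketch, not the authors' released code: a contrastive-style objective in CLIP embedding space in which the class prompt acts as the positive and a prompt naming the spurious attribute acts as a negative. The function name, the temperature value, the example prompts, and the toy random tensors standing in for CLIP encoder outputs are all illustrative assumptions.

```python
# Hypothetical sketch of a language-guided contrastive loss for fine-tuning
# a CLIP-style model: pull image embeddings toward their class prompt and
# push them away from a prompt naming the spurious attribute.
import torch
import torch.nn.functional as F

def language_guided_contrastive_loss(image_emb, class_text_emb,
                                     spurious_text_emb, temperature=0.07):
    """image_emb: (B, D) image embeddings for one class.
    class_text_emb: (D,) embedding of the class prompt
        (e.g., "a photo of a landbird").
    spurious_text_emb: (D,) embedding of the spurious-attribute prompt
        (e.g., "a photo of water").
    The class prompt is the positive; the spurious prompt is a negative."""
    image_emb = F.normalize(image_emb, dim=-1)
    class_text_emb = F.normalize(class_text_emb, dim=-1)
    spurious_text_emb = F.normalize(spurious_text_emb, dim=-1)
    pos = image_emb @ class_text_emb / temperature       # (B,) similarity to class
    neg = image_emb @ spurious_text_emb / temperature    # (B,) similarity to spurious attribute
    logits = torch.stack([pos, neg], dim=1)              # (B, 2)
    targets = torch.zeros(image_emb.size(0), dtype=torch.long)  # positive at index 0
    return F.cross_entropy(logits, targets)

# Toy usage: random tensors stand in for CLIP image/text encoder outputs.
B, D = 8, 512
loss = language_guided_contrastive_loss(torch.randn(B, D),
                                        torch.randn(D), torch.randn(D))
print(loss.item())
```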
