Contrastive Adapters for Foundation Model Group Robustness
Michael Zhang · Christopher Ré
Event URL: https://openreview.net/forum?id=JZP2U_0RTee

While large pretrained foundation models (FMs) have shown remarkable zero-shot classification robustness to dataset-level distribution shifts, their robustness to group shifts is relatively underexplored. We study this problem, and first find that popular FMs such as CLIP may not be robust to various group shifts. On prior robustness benchmarks, they achieve up to an 80.7 percentage point (pp) gap between average and worst-group accuracy. Unfortunately, current methods to improve robustness require retraining, which can be prohibitively expensive for large FMs. We also find that existing ways to efficiently improve large model inference, e.g., by training adapters (lightweight MLPs) on top of FM embeddings, can hurt group robustness compared to zero-shot classification. We thus propose the first adapter training method designed to improve FM robustness to group shifts. While prior work only trains adapters with class labels, we add a contrastive objective that explicitly learns similar adapter embeddings for same-class samples whose FM embeddings are initially dissimilar. Across the same benchmarks, contrastive adapting effectively and efficiently improves group robustness, raising worst-group accuracy by 16.0 to 56.0 pp over zero-shot without any FM fine-tuning. Beyond FM robustness, contrastive adapting achieves near-state-of-the-art robustness on Waterbirds and CelebA while training only 1% of the model parameters trained by other methods.
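To make the setup concrete, the sketch below shows the general recipe the abstract describes: a lightweight MLP adapter trained on frozen FM embeddings with a standard cross-entropy loss plus a supervised contrastive term that pulls together same-class samples. This is a minimal illustration assuming PyTorch; the adapter sizes, temperature, loss weighting, and positive-pair sampling are illustrative assumptions, not the authors' released implementation.

```python
# Hedged sketch (not the authors' code): adapter over frozen FM embeddings,
# trained with cross-entropy + a contrastive term over same-class pairs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Adapter(nn.Module):
    """Two-layer MLP applied on top of precomputed FM embeddings."""
    def __init__(self, embed_dim: int, hidden_dim: int, num_classes: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, embed_dim),
        )
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, z):
        h = self.mlp(z)               # adapted embedding
        return h, self.classifier(h)  # embedding and class logits

def supervised_contrastive_loss(h, labels, temperature=0.1):
    """Pull together adapter embeddings that share a class label."""
    h = F.normalize(h, dim=-1)
    sim = h @ h.T / temperature                        # pairwise similarities
    pos = labels.unsqueeze(0) == labels.unsqueeze(1)   # same-class pairs
    pos.fill_diagonal_(False)
    sim = sim - torch.eye(len(h), device=h.device) * 1e9  # mask self-pairs
    log_prob = F.log_softmax(sim, dim=1)
    pos_counts = pos.sum(dim=1).clamp(min=1)
    return -(log_prob * pos).sum(dim=1).div(pos_counts).mean()

# One training step on a batch of frozen FM embeddings `z` with labels `y`.
adapter = Adapter(embed_dim=512, hidden_dim=128, num_classes=2)
optimizer = torch.optim.Adam(adapter.parameters(), lr=1e-3)

z = torch.randn(32, 512)         # placeholder for precomputed CLIP embeddings
y = torch.randint(0, 2, (32,))   # class labels
h, logits = adapter(z)
loss = F.cross_entropy(logits, y) + supervised_contrastive_loss(h, y)
loss.backward()
optimizer.step()
```

Because only the adapter's parameters are updated while the FM stays frozen, the number of trained parameters stays a small fraction of what full fine-tuning or retraining-based robustness methods would require, which is the efficiency argument the abstract makes.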

Author Information

Michael Zhang (Stanford University)
Christopher Ré (Stanford University)
