

Poster in Workshop: ICML 2024 Workshop on Foundation Models in the Wild

An Empirical Study into Clustering of Unseen Datasets with Self-Supervised Foundation Models

Scott C. Lowe · Joakim Haurum · Sageev Oore · Thomas Moeslund · Graham Taylor

Keywords: [ domain shift ] [ foundation models ] [ empirical ] [ clustering ] [ computer vision ] [ self-supervised learning ] [ benchmark ] [ SSL ] [ images ]


Abstract:

Can foundation models generalize to new datasets outside their training domain, without any retraining? Our suite of benchmarking experiments uses encoders pretrained solely on ImageNet-1k with either supervised or self-supervised training techniques, and applies conventional clustering algorithms to image datasets that were not seen during training. This evaluation allows us to investigate the impact of the pretraining protocol on a model's ability to generalize outside its training domain, and to explore what the model natively prioritizes in its embeddings in a real-world scenario where novel data lacks labels. We find that supervised encoders typically offer more utility than SSL encoders within the training domain, and vice versa far outside of it; however, fine-tuned SSL encoders demonstrate the opposite trend.
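A minimal sketch of the kind of evaluation the abstract describes, not the authors' exact pipeline: embed an unseen labelled image dataset with a frozen ImageNet-1k pretrained encoder, cluster the embeddings with a conventional algorithm, and score the clusters against the ground-truth labels. The choice of ResNet-50, CIFAR-10, k-means, and adjusted mutual information here is illustrative only.

```python
# Illustrative sketch: cluster a dataset unseen during pretraining using a
# frozen ImageNet-1k encoder. Labels are used only for scoring, never fitting.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_mutual_info_score

device = "cuda" if torch.cuda.is_available() else "cpu"

# Supervised ImageNet-1k encoder; an SSL checkpoint (e.g. DINO, MoCo) could be
# swapped in here for the self-supervised comparison.
encoder = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
encoder.fc = torch.nn.Identity()  # keep the 2048-d penultimate features
encoder.eval().to(device)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Any labelled dataset outside the training domain works here; CIFAR-10 is a
# stand-in example only.
dataset = datasets.CIFAR10(root="data", train=False, download=True,
                           transform=preprocess)
loader = DataLoader(dataset, batch_size=256, num_workers=4)

features, labels = [], []
with torch.no_grad():
    for images, targets in loader:
        features.append(encoder(images.to(device)).cpu())
        labels.append(targets)
features = torch.cat(features).numpy()
labels = torch.cat(labels).numpy()

# Conventional clustering on the frozen embeddings, then agreement with the
# ground-truth classes as the evaluation signal.
n_clusters = len(set(labels.tolist()))
preds = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)
print("Adjusted mutual information:", adjusted_mutual_info_score(labels, preds))
```

Swapping the encoder (supervised vs. SSL vs. fine-tuned SSL) while holding the clustering step fixed is what isolates the effect of the pretraining protocol on out-of-domain utility.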
