Self-supervised learning is a powerful paradigm for representation learning on unlabelled images. A wealth of effective new methods based on instance matching rely on data augmentation to drive learning, and these have reached a rough agreement on an augmentation scheme that optimises popular recognition benchmarks. However, there is strong reason to suspect that different tasks in computer vision require features to encode different (in)variances, and therefore likely require different augmentation strategies. In this paper, we measure the invariances learned by contrastive methods and confirm that they do learn invariance to the augmentations used, and we further show that this invariance largely transfers to related real-world changes in pose and lighting. We show that learned invariances strongly affect downstream task performance and confirm that different downstream tasks benefit from polar opposite (in)variances, leading to performance loss when the standard augmentation strategy is used. Finally, we demonstrate that a simple fusion of representations with complementary invariances ensures wide transferability to all the diverse downstream tasks considered.
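The two operations the abstract describes, measuring how invariant a representation is to a transformation, and fusing two representations with complementary invariances, can be sketched as follows. This is a minimal illustration, not the paper's exact protocol: here invariance is approximated as the mean cosine similarity between features of original and augmented images, and fusion is simple feature concatenation; the function names and the choice of metric are assumptions for illustration.

```python
import numpy as np

def invariance_score(feats: np.ndarray, aug_feats: np.ndarray) -> float:
    """Mean cosine similarity between features of images and their augmented
    versions. Rows are per-image feature vectors. A score near 1 suggests the
    representation is invariant to the augmentation; a low score suggests the
    augmentation information is preserved (equivariance/variance)."""
    a = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    b = aug_feats / np.linalg.norm(aug_feats, axis=1, keepdims=True)
    return float(np.mean(np.sum(a * b, axis=1)))

def fuse(feats_a: np.ndarray, feats_b: np.ndarray) -> np.ndarray:
    """Fuse two representations with complementary invariances by
    concatenating them along the feature dimension, so a downstream head
    can draw on whichever (in)variances its task needs."""
    return np.concatenate([feats_a, feats_b], axis=1)
```

For example, features that are unchanged by an augmentation score 1.0, while features that flip sign under it score -1.0, and fusing two 128-dimensional representations yields a 256-dimensional one.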
Author Information
Linus Ericsson (University of Edinburgh)
Henry Gouk (University of Edinburgh)
Timothy Hospedales (Samsung AI Centre / University of Edinburgh)
More from the Same Authors
- 2022: Attacking Adversarial Defences by Smoothing the Loss Landscape
  Panagiotis Eustratiadis · Henry Gouk · Da Li · Timothy Hospedales
- 2022: HyperInvariances: Amortizing Invariance Learning
  Ruchika Chavhan · Henry Gouk · Jan Stuehmer · Timothy Hospedales
- 2022: Feed-Forward Source-Free Latent Domain Adaptation via Cross-Attention
  Ondrej Bohdal · Da Li · Xu Hu · Timothy Hospedales
- 2023: Impact of Noise on Calibration and Generalisation of Neural Networks
  Martin Ferianc · Ondrej Bohdal · Timothy Hospedales · Miguel Rodrigues
- 2023: Evaluating the Evaluators: Are Current Few-Shot Learning Benchmarks Fit for Purpose?
  Luísa Shimabucoro · Timothy Hospedales · Henry Gouk
- 2022 Poster: Loss Function Learning for Domain Generalization by Implicit Gradient
  Boyan Gao · Henry Gouk · Yongxin Yang · Timothy Hospedales
- 2022 Poster: Fisher SAM: Information Geometry and Sharpness Aware Minimisation
  Minyoung Kim · Da Li · Xu Hu · Timothy Hospedales
- 2022 Spotlight: Fisher SAM: Information Geometry and Sharpness Aware Minimisation
  Minyoung Kim · Da Li · Xu Hu · Timothy Hospedales
- 2022 Spotlight: Loss Function Learning for Domain Generalization by Implicit Gradient
  Boyan Gao · Henry Gouk · Yongxin Yang · Timothy Hospedales
- 2021 Poster: Weight-covariance alignment for adversarially robust neural networks
  Panagiotis Eustratiadis · Henry Gouk · Da Li · Timothy Hospedales
- 2021 Spotlight: Weight-covariance alignment for adversarially robust neural networks
  Panagiotis Eustratiadis · Henry Gouk · Da Li · Timothy Hospedales
- 2019 Poster: Analogies Explained: Towards Understanding Word Embeddings
  Carl Allen · Timothy Hospedales
- 2019 Oral: Analogies Explained: Towards Understanding Word Embeddings
  Carl Allen · Timothy Hospedales
- 2019 Poster: Feature-Critic Networks for Heterogeneous Domain Generalization
  Yiying Li · Yongxin Yang · Wei Zhou · Timothy Hospedales
- 2019 Oral: Feature-Critic Networks for Heterogeneous Domain Generalization
  Yiying Li · Yongxin Yang · Wei Zhou · Timothy Hospedales