The well-known domain shift problem causes model performance to degrade when a model is deployed to a new target domain whose statistics differ from those of the training data. Domain adaptation techniques alleviate this, but require instances from the target domain to drive adaptation. Domain generalisation is the recently topical problem of learning a model that generalises to unseen domains out of the box; most approaches aim to train a domain-invariant feature extractor, typically by adding manually designed losses. In this work, we propose a learning-to-learn approach in which the auxiliary loss that promotes generalisation is itself learned. Beyond conventional domain generalisation, we consider the more challenging setting of heterogeneous domain generalisation, where the unseen domains do not share a label space with the seen ones, and the goal is to train a feature representation that is useful off-the-shelf for novel data and novel categories. Experimental evaluation demonstrates that our method outperforms state-of-the-art solutions in both settings.
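The core idea can be sketched in code. The following is a minimal toy illustration, not the paper's actual Feature-Critic architecture (there, the critic is a learned network over features and gradients flow through the inner update): here the critic is a single scalar weight `omega` on a feature-magnitude penalty, the model is linear, and the critic is updated by finite differences rather than backpropagation through the inner step. The held-out source domain plays the role of meta-test, and `omega` is nudged so that the critic-guided update improves held-out performance. All names and the toy data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: two source "domains" sharing a linear labelling rule
# but with different input statistics (the domain shift).
d, k, n = 5, 3, 64
w_true = rng.normal(size=d)

def make_domain(scale):
    X = rng.normal(scale=scale, size=(n, d))
    return X, X @ w_true

X_mtr, y_mtr = make_domain(1.0)   # meta-train source domain
X_mte, y_mte = make_domain(3.0)   # held-out meta-test source domain

W = 0.1 * rng.normal(size=(d, k))  # feature extractor (meta-regularised)
V = rng.normal(size=k)             # fixed task head: predictions are (X @ W) @ V
omega = 0.0                        # critic parameter: weight of learned feature penalty
lr, lr_omega = 0.05, 1e-2

def task_loss(W, X, y):
    return np.mean(((X @ W) @ V - y) ** 2)

def update_dir(W, X, y, omega):
    """Gradient of task loss + omega * mean(features**2) w.r.t. W."""
    f = X @ W
    dLdp = 2 * ((f @ V) - y) / len(y)
    g_task = X.T @ np.outer(dLdp, V)
    g_aux = X.T @ (2 * omega * f / f.size)
    return g_task + g_aux

init_loss = task_loss(W, X_mte, y_mte)
for _ in range(200):
    # Inner step: update the extractor with task loss + learned auxiliary loss.
    W_new = W - lr * update_dir(W, X_mtr, y_mtr, omega)
    # Outer step: nudge omega so the critic-guided update helps on the
    # held-out domain (finite differences, since omega is a scalar here).
    eps = 1e-4
    up = task_loss(W - lr * update_dir(W, X_mtr, y_mtr, omega + eps), X_mte, y_mte)
    dn = task_loss(W - lr * update_dir(W, X_mtr, y_mtr, omega - eps), X_mte, y_mte)
    omega -= lr_omega * (up - dn) / (2 * eps)
    W = W_new
final_loss = task_loss(W, X_mte, y_mte)
```

The design point this sketch captures is the bilevel structure: the extractor is trained on meta-train domains with the task loss plus a learned auxiliary loss, while the auxiliary loss's parameters are trained to improve performance on held-out meta-test domains.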
Author Information
Yiying Li (National University of Defense Technology)
Yongxin Yang (University of Edinburgh)
Wei Zhou (National University of Defense Technology)
Timothy Hospedales (Samsung AI Centre / University of Edinburgh)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Oral: Feature-Critic Networks for Heterogeneous Domain Generalization »
  Tue. Jun 11th 10:00 -- 10:05 PM, Room: Grand Ballroom
More from the Same Authors
- 2022 : Attacking Adversarial Defences by Smoothing the Loss Landscape »
  Panagiotis Eustratiadis · Henry Gouk · Da Li · Timothy Hospedales
- 2022 : HyperInvariances: Amortizing Invariance Learning »
  Ruchika Chavhan · Henry Gouk · Jan Stuehmer · Timothy Hospedales
- 2022 : Feed-Forward Source-Free Latent Domain Adaptation via Cross-Attention »
  Ondrej Bohdal · Da Li · Xu Hu · Timothy Hospedales
- 2023 : Impact of Noise on Calibration and Generalisation of Neural Networks »
  Martin Ferianc · Ondrej Bohdal · Timothy Hospedales · Miguel Rodrigues
- 2023 : Evaluating the Evaluators: Are Current Few-Shot Learning Benchmarks Fit for Purpose? »
  Luísa Shimabucoro · Timothy Hospedales · Henry Gouk
- 2023 : Why Do Self-Supervised Models Transfer? On Data Augmentation and Feature Properties »
  Linus Ericsson · Henry Gouk · Timothy Hospedales
- 2022 Poster: Loss Function Learning for Domain Generalization by Implicit Gradient »
  Boyan Gao · Henry Gouk · Yongxin Yang · Timothy Hospedales
- 2022 Poster: Fisher SAM: Information Geometry and Sharpness Aware Minimisation »
  Minyoung Kim · Da Li · Xu Hu · Timothy Hospedales
- 2022 Spotlight: Fisher SAM: Information Geometry and Sharpness Aware Minimisation »
  Minyoung Kim · Da Li · Xu Hu · Timothy Hospedales
- 2022 Spotlight: Loss Function Learning for Domain Generalization by Implicit Gradient »
  Boyan Gao · Henry Gouk · Yongxin Yang · Timothy Hospedales
- 2021 Poster: Weight-covariance alignment for adversarially robust neural networks »
  Panagiotis Eustratiadis · Henry Gouk · Da Li · Timothy Hospedales
- 2021 Spotlight: Weight-covariance alignment for adversarially robust neural networks »
  Panagiotis Eustratiadis · Henry Gouk · Da Li · Timothy Hospedales
- 2019 Poster: Analogies Explained: Towards Understanding Word Embeddings »
  Carl Allen · Timothy Hospedales
- 2019 Oral: Analogies Explained: Towards Understanding Word Embeddings »
  Carl Allen · Timothy Hospedales