Providing invariances for a given learning task conveys a key inductive bias that can lead to sample-efficient learning and good generalisation, if correctly specified. However, the ideal invariances for many problems of interest are often unknown, which has given rise both to a body of engineering lore and to frameworks for invariance learning. Invariance learning is nonetheless expensive and data intensive for popular neural architectures. We introduce the notion of amortizing invariance learning. In an up-front learning phase, we learn a low-dimensional manifold of feature extractors spanning invariance to different transformations using a hyper-network. Then, for any problem of interest, both model and invariance learning are rapid and efficient: one need only fit a low-dimensional invariance descriptor and an output head. Empirically, this framework can identify appropriate invariances in different downstream tasks and leads to comparable or better test performance than conventional approaches. Our HyperInvariance framework is also theoretically appealing, as it enables generalisation bounds that provide an interesting new operating point in the trade-off between model fit and complexity.
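The amortized scheme the abstract describes can be sketched schematically: a hypernetwork maps a low-dimensional invariance descriptor to the weights of a feature extractor, so that downstream learning only fits the descriptor and a linear head. This is a minimal illustrative sketch; all names, dimensions, and the linear-extractor stand-in are assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper):
d_z, d_in, d_feat = 3, 8, 16  # invariance descriptor, input, feature dims

# Hypernetwork: maps an invariance descriptor z to the weights of a
# linear feature extractor (a stand-in for a full network's weights).
H = rng.normal(size=(d_z, d_in * d_feat)) / np.sqrt(d_z)

def feature_extractor(z, x):
    """Generate extractor weights from the descriptor z, then embed x."""
    W = (z @ H).reshape(d_in, d_feat)   # amortized weight generation
    return np.tanh(x @ W)               # features for a batch of inputs

# Downstream task: only z (a handful of parameters) and a linear output
# head would be fit, keeping adaptation rapid and low-dimensional.
z = rng.normal(size=d_z)        # invariance descriptor to be learned
x = rng.normal(size=(4, d_in))  # a batch of 4 inputs
head = rng.normal(size=(d_feat, 2))
logits = feature_extractor(z, x) @ head
print(logits.shape)  # → (4, 2)
```

The point of the sketch is the parameter count: downstream learning touches only `z` and `head`, while the expensive hypernetwork `H` is trained once, up front.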
Author Information
Ruchika Chavhan (School of Informatics, University of Edinburgh)
Henry Gouk (University of Edinburgh)
Jan Stuehmer (Samsung Research)
Timothy Hospedales (Samsung AI Centre / University of Edinburgh)