Loss Function Learning for Domain Generalization by Implicit Gradient
Boyan Gao · Henry Gouk · Yongxin Yang · Timothy Hospedales

Wed Jul 20 03:30 PM -- 05:30 PM (PDT) @ Hall E #425

Generalising robustly to distribution shift is a major challenge that is pervasive across most real-world applications of machine learning. A recent study highlighted that many advanced algorithms proposed to tackle such domain generalisation (DG) fail to outperform a properly tuned empirical risk minimisation (ERM) baseline. We take a different approach and explore the impact of the ERM loss function on out-of-domain generalisation. In particular, we introduce a novel meta-learning approach to loss function search based on implicit gradient. This enables us to discover a general-purpose parametric loss function that provides a drop-in replacement for cross-entropy. Our loss can be used in standard training pipelines to efficiently train robust models using any neural architecture on new datasets. The results show that it clearly surpasses cross-entropy, enables simple ERM to outperform some more complicated prior DG methods, and provides state-of-the-art performance across a variety of DG benchmarks. Furthermore, unlike most existing DG approaches, our setup applies to the most practical setting of single-source domain generalisation, on which we show significant improvement.
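This page does not give the paper's actual loss parameterisation, so the sketch below is only illustrative of the "drop-in replacement for cross-entropy" idea: a small PyTorch module (the name `ParametricLoss` and the polynomial form are assumptions, not the authors' method) whose meta-parameters define a learnable polynomial in the negative log-likelihood, initialised so that it recovers standard cross-entropy. In the paper, such meta-parameters would be tuned by an outer meta-learning loop using implicit gradients; only the inner, drop-in usage is shown here.

```python
# Minimal sketch of a parametric, cross-entropy-compatible loss.
# ASSUMPTION: the polynomial-in-NLL parameterisation below is illustrative
# only; it is not the parameterisation used in the paper.
import torch
import torch.nn.functional as F


class ParametricLoss(torch.nn.Module):
    """Learnable polynomial of the per-example negative log-likelihood."""

    def __init__(self, degree: int = 3):
        super().__init__()
        # Initialise so the degree-1 term dominates: with theta = [1, 0, 0]
        # this loss is exactly standard cross-entropy.
        init = torch.zeros(degree)
        init[0] = 1.0
        # In the paper's setting these would be meta-parameters updated by
        # an outer loop via implicit gradients, not by the inner optimiser.
        self.theta = torch.nn.Parameter(init)

    def forward(self, logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # Probability assigned to the true class for each example.
        p_y = F.softmax(logits, dim=-1).gather(1, target.unsqueeze(1)).squeeze(1)
        nll = -torch.log(p_y.clamp_min(1e-8))
        # Loss = sum_k theta_k * nll^(k+1), averaged over the batch.
        powers = torch.stack(
            [nll ** (k + 1) for k in range(self.theta.numel())], dim=-1
        )
        return (powers * self.theta).sum(-1).mean()


# Drop-in usage in an otherwise standard ERM training loop:
#   criterion = ParametricLoss()
#   loss = criterion(model(x), y)   # replaces F.cross_entropy(model(x), y)
```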

Author Information

Boyan Gao (University of Edinburgh)
Henry Gouk (University of Edinburgh)
Yongxin Yang (University of Edinburgh)
Timothy Hospedales (Samsung AI Centre / University of Edinburgh)