

Poster in Workshop: Machine Learning for Data: Automated Creation, Privacy, Bias

Towards Principled Disentanglement for Domain Generalization

Hanlin Zhang · Yi-Fan Zhang · Weiyang Liu · Adrian Weller · Bernhard Schölkopf · Eric Xing

Keywords: [ Architectures ]


Abstract:

It is fundamentally challenging for machine learning models to generalize to out-of-distribution data, in part due to spurious correlations. We first give a principled analysis by bounding the generalization risk on any unseen domain. Drawing inspiration from this risk upper bound, we propose a novel Disentangled representation learning method for Domain Generalization (DDG). In contrast to traditional approaches based on domain adversarial training and domain labels, DDG jointly learns semantic and variation encoders for disentanglement, while employing strong regularizations that minimize domain divergence and promote semantic invariance. DDG effectively disentangles semantic and variation factors, and this disentanglement lets us easily manipulate and augment the training data. Leveraging the augmented training data, DDG learns intrinsic representations of semantic concepts that are invariant to nuisance factors and generalize across different domains. Comprehensive experiments on a number of benchmarks show that DDG achieves state-of-the-art performance on domain generalization and uncovers interpretable salient structure within the data.
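To make the disentangle-then-augment idea concrete, here is a minimal PyTorch sketch of how a joint semantic/variation encoding with swap-based augmentation and a semantic-invariance regularizer could look. The MLP architectures, loss terms, dimensions, and the weight `lam` are illustrative assumptions, not the authors' implementation of DDG.

```python
# A minimal sketch of disentanglement-based augmentation, assuming flattened
# image inputs. All module names, sizes, and losses here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

CODE_DIM, IMG_DIM = 64, 3 * 32 * 32  # hypothetical sizes


def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, out_dim))


semantic_enc = mlp(IMG_DIM, CODE_DIM)   # captures class-relevant content
variation_enc = mlp(IMG_DIM, CODE_DIM)  # captures nuisance/domain factors
decoder = mlp(2 * CODE_DIM, IMG_DIM)    # renders an image from both codes


def render(s, v):
    return decoder(torch.cat([s, v], dim=-1))


def ddg_style_loss(x_a, x_b, lam=1.0):
    """x_a, x_b: flattened image batches drawn from two training domains."""
    s_a, v_a = semantic_enc(x_a), variation_enc(x_a)
    s_b, v_b = semantic_enc(x_b), variation_enc(x_b)

    # Reconstruction: the two codes must jointly suffice to describe the input.
    recon = F.mse_loss(render(s_a, v_a), x_a) + F.mse_loss(render(s_b, v_b), x_b)

    # Augmentation by manipulation: re-render each image's semantics under the
    # other image's variation factors.
    x_ab, x_ba = render(s_a, v_b), render(s_b, v_a)

    # Semantic invariance: re-encoding the swapped renderings should recover
    # the original semantic codes, so semantics stay invariant to variation.
    inv = (F.mse_loss(semantic_enc(x_ab), s_a.detach())
           + F.mse_loss(semantic_enc(x_ba), s_b.detach()))

    return recon + lam * inv


# Toy usage: two random batches standing in for two training domains.
x_a, x_b = torch.randn(8, IMG_DIM), torch.randn(8, IMG_DIM)
ddg_style_loss(x_a, x_b).backward()
```

This sketch covers only the disentanglement and swap-augmentation mechanics; the full objective described in the abstract also includes the domain-divergence regularization and a task loss for the downstream classifier, which are omitted here.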
