

Oral in Workshop: The Second Workshop on Spurious Correlations, Invariance and Stability

Provable domain adaptation using privileged information

Adam Breitholtz · Anton Matsson · Fredrik Johansson


Abstract:

Successful unsupervised domain adaptation is guaranteed only under strong assumptions such as covariate shift and overlap between input domains. The latter is often violated in high-dimensional applications such as image classification, which, despite this challenge, continues to serve as inspiration and benchmark for algorithm development. In this work, we show that access to side information about examples from the source and target domains can help relax the sufficient assumptions on input variables and increase sample efficiency, at the cost of collecting a richer variable set. We call this unsupervised domain adaptation by learning using privileged information (DALUPI). We propose algorithms tailored to this setting for both multi-class and multi-label classification. In our experiments, we demonstrate that incorporating privileged information in learning can reduce errors in domain transfer and increase sample efficiency compared to classical learning.
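To make the setting concrete, the sketch below illustrates one generic way privileged information can be used for unsupervised domain adaptation: predict the privileged variables W from inputs X using examples pooled from both domains (no target labels needed), predict labels Y from W using source labels only, and compose the two models on the target domain. This is a minimal illustration on synthetic data, not the authors' DALUPI algorithm; the data-generating process, estimator choices, and variable names are assumptions made for exposition.

# Illustrative sketch only: a generic two-stage use of privileged information (W)
# for unsupervised domain adaptation. Not the DALUPI algorithm from the paper;
# all modeling choices here are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n_s, n_t, d_x, d_w = 500, 500, 20, 3

# Source domain: inputs X_s, privileged information W_s, labels y_s.
X_s = rng.normal(size=(n_s, d_x))
A = rng.normal(size=(d_x, d_w))              # shared mapping X -> W across domains
W_s = X_s @ A + 0.1 * rng.normal(size=(n_s, d_w))
y_s = (W_s.sum(axis=1) > 0).astype(int)      # labels depend on W only

# Target domain: shifted inputs, privileged information observed, no labels.
X_t = rng.normal(loc=0.5, size=(n_t, d_x))
W_t = X_t @ A + 0.1 * rng.normal(size=(n_t, d_w))
y_t = (W_t.sum(axis=1) > 0).astype(int)      # held out, used only for evaluation

# Stage 1: learn X -> W on pooled source and target data (W is observed in both).
h = LinearRegression().fit(np.vstack([X_s, X_t]), np.vstack([W_s, W_t]))

# Stage 2: learn W -> Y from labeled source examples only.
g = LogisticRegression(max_iter=1000).fit(W_s, y_s)

# Compose the two stages to label the target domain.
y_pred = g.predict(h.predict(X_t))
print("Target accuracy (illustrative):", (y_pred == y_t).mean())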
