Poster
Estimating Generalization under Distribution Shifts via Domain-Invariant Representations
Ching-Yao Chuang · Antonio Torralba · Stefanie Jegelka
Keywords: [ Representation Learning ] [ Transfer and Multitask Learning ] [ Algorithms ] [ Transfer, Multitask and Meta-learning ]
When machine learning models are deployed on a test distribution different from the training distribution, they can perform poorly, and their performance is easily overestimated. In this work, we aim to better estimate a model's performance under distribution shift, without supervision. To do so, we use a set of domain-invariant predictors as a proxy for the unknown, true target labels. Since the error of the resulting risk estimate depends on the target risk of the proxy model, we study the generalization of domain-invariant representations and show that the complexity of the latent representation has a significant influence on the target risk. Empirically, our approach (1) enables self-tuning of domain adaptation models, and (2) accurately estimates the target error of given models under distribution shift. Other applications include model selection, deciding when to stop training early, and error detection.
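To make the proxy idea concrete, below is a minimal NumPy sketch of a disagreement-based target-risk estimate: the unknown target labels are stood in for by the predictions of a set of domain-invariant models, and the worst-case disagreement serves as a conservative error estimate. The function and variable names (`proxy_risk`, `model_preds`, `proxy_preds_list`) are hypothetical, and the proxy predictors are assumed to be already trained with some domain-invariant objective (e.g., domain-adversarial training); this is an illustration of the idea, not the paper's released implementation.

```python
import numpy as np

def proxy_risk(model_preds, proxy_preds_list):
    """Estimate a model's target error as its worst-case disagreement
    with a set of domain-invariant proxy predictors.

    model_preds: (n,) array of the evaluated model's predicted labels
                 on unlabeled target inputs.
    proxy_preds_list: list of (n,) arrays, one per proxy predictor,
                      on the same target inputs.
    """
    # Each proxy stands in for the unknown target labels; the fraction
    # of points where the model and a proxy disagree approximates the
    # model's error if that proxy were correct.
    disagreements = [float(np.mean(model_preds != p)) for p in proxy_preds_list]
    # Taking the maximum over proxies yields a conservative estimate.
    return max(disagreements)

# Hypothetical usage with random stand-in predictions on 1000 target points.
rng = np.random.default_rng(0)
model_preds = rng.integers(0, 10, size=1000)
proxies = [rng.integers(0, 10, size=1000) for _ in range(3)]
print(f"estimated target error: {proxy_risk(model_preds, proxies):.3f}")
```

The quality of this estimate hinges on the proxies themselves generalizing to the target distribution, which is why the paper studies how the complexity of the domain-invariant representation affects the proxies' target risk.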