Abstract
Domain adaptation algorithms and theory have relied on the assumption that the observed data uniquely specify the correct correspondence between the domains. Unfortunately, it is unclear under what conditions this identifiability assumption holds, even when restricting ourselves to the case where a correct bijective map between domains exists. We study this bijective domain mapping problem and provide several new sufficient conditions for the identifiability of linear domain maps. As a consequence of our analysis, we show that weak constraints on the third moment tensor suffice for identifiability, prove identifiability for common latent variable models such as topic models, and give a computationally tractable method for generating certificates for the identifiability of linear maps. Inspired by our certification method, we derive a new objective function for domain mapping that explicitly accounts for uncertainty over maps arising from unidentifiability. We demonstrate that our objective leads to improvements in uncertainty quantification and model performance estimation.
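The abstract does not spell out the construction, but the unidentifiability it refers to is easy to demonstrate numerically: a linear map matched only on first and second moments is determined at best up to an orthogonal transform, while third-moment (skewness) information can separate the true map from the impostors. The sketch below is an illustrative assumption, not the paper's method; names such as A_true, whiten, and third_moment_gap are invented for the example.

```python
# Hypothetical illustration (not the paper's algorithm): with only second
# moments, any map of the form Wy^{-1} Q Wx with Q orthogonal matches the
# target covariance, so second moments cannot identify the true linear map;
# for skewed (non-Gaussian) data, the third-moment tensor breaks the tie.
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 50_000

# Source domain: independent, skewed, mean-zero coordinates.
X = rng.exponential(scale=1.0, size=(n, d)) - 1.0
A_true = rng.normal(size=(d, d))        # ground-truth linear domain map
Y = X @ A_true.T                        # target domain

def whiten(Z):
    """Return a symmetric whitening matrix W with Cov(Z W^T) = I."""
    cov = np.cov(Z, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    return evecs @ np.diag(evals ** -0.5) @ evecs.T

Wx, Wy = whiten(X), whiten(Y)

# A "wrong" map built from a random orthogonal Q still matches second moments.
Q_random = np.linalg.qr(rng.normal(size=(d, d)))[0]
A_wrong = np.linalg.inv(Wy) @ Q_random @ Wx
print(np.allclose(np.cov(X @ A_wrong.T, rowvar=False),
                  np.cov(Y, rowvar=False), atol=0.05))   # True: covariances agree

def third_moment_gap(A):
    """Frobenius mismatch between the mapped and target third-moment tensors."""
    Z = X @ A.T
    T_map = np.einsum('ni,nj,nk->ijk', Z, Z, Z) / n
    T_tgt = np.einsum('ni,nj,nk->ijk', Y, Y, Y) / n
    return np.linalg.norm(T_map - T_tgt)

print(third_moment_gap(A_true), third_moment_gap(A_wrong))  # ~0 vs. clearly nonzero
```

The gap function is a crude empirical stand-in for the third-moment constraints the abstract mentions: the true map drives it to (approximately) zero, while covariance-matching impostors do not.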
Author Information
Ishaan Gulrajani (Stanford)
Tatsunori Hashimoto (Stanford)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Spotlight: Identifiability Conditions for Domain Adaptation
  Wed. Jul 20th, 03:25 -- 03:30 PM, Room Hall F
More from the Same Authors
- 2023: Exploiting Programmatic Behavior of LLMs: Dual-Use Through Standard Security Attacks
  Daniel Kang · Xuechen Li · Ion Stoica · Carlos Guestrin · Matei Zaharia · Tatsunori Hashimoto
- 2023 Poster: Data Feedback Loops: Model-driven Amplification of Dataset Biases
  Rohan Taori · Tatsunori Hashimoto
- 2023 Poster: Coder Reviewer Reranking for Code Generation
  Tianyi Zhang · Tao Yu · Tatsunori Hashimoto · Mike Lewis · Scott Yih · Daniel Fried · Sida Wang
- 2023 Poster: Whose Opinions Do Language Models Reflect?
  Shibani Santurkar · Esin Durmus · Faisal Ladhak · Cinoo Lee · Percy Liang · Tatsunori Hashimoto
- 2023 Oral: Data Feedback Loops: Model-driven Amplification of Dataset Biases
  Rohan Taori · Tatsunori Hashimoto
- 2023 Oral: Whose Opinions Do Language Models Reflect?
  Shibani Santurkar · Esin Durmus · Faisal Ladhak · Cinoo Lee · Percy Liang · Tatsunori Hashimoto
- 2023 Oral: Evaluating Self-Supervised Learning via Risk Decomposition
  Yann Dubois · Tatsunori Hashimoto · Percy Liang
- 2023 Poster: Evaluating Self-Supervised Learning via Risk Decomposition
  Yann Dubois · Tatsunori Hashimoto · Percy Liang
- 2023 Poster: Out-of-Domain Robustness via Targeted Augmentations
  Irena Gao · Shiori Sagawa · Pang Wei Koh · Tatsunori Hashimoto · Percy Liang