Domain Adaptation with Adaptive $f$-Divergence: Tighter Variational Representation and Generalization Bounds
Zhe Cheng ⋅ Fode Zhang ⋅ Lingrui Wang ⋅ Yifan Zhu ⋅ Jiaolong Wang
Abstract
We study unsupervised domain adaptation (UDA) where measuring cross-domain discrepancy is critical. Most UDA approaches fix a single $f$-divergence a priori, which can be suboptimal across heterogeneous shifts. We propose a framework that (i) tightens the variational lower bound of an $f$-divergence by inserting a learnable, monotone $L$-Lipschitz transform $\tau$ (Tighter-VR), and (ii) selects the divergence family adaptively from data via a likelihood-based criterion. The resulting estimator yields more informative and statistically efficient discrepancy estimates while recovering prior fixed-divergence methods as special cases. Theoretically, we derive a target-risk bound whose three components are a transformed source risk, a Tighter-VR discrepancy between domains, and an ideal-hypothesis residual; we further provide finite-sample guarantees using standard complexity measures. Empirically, on Office-31, Office-Home, Digits, and VisDA-2017, our method consistently improves accuracy over strong baselines, demonstrating that coupling Tighter-VR with adaptive divergence selection yields tangible gains in UDA.
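For concreteness, the two objects named in the abstract can be sketched as follows, assuming the standard Nguyen–Wainwright–Jordan variational form of an $f$-divergence with convex conjugate $f^{*}$ and a critic $T$; the symbols $\mathcal{M}_L$, $\widetilde{\varepsilon}_S$, and $\lambda^{*}$ below are illustrative placeholders rather than the paper's own notation. Inserting a learnable, monotone $L$-Lipschitz transform $\tau$ into the critic output gives a family of lower bounds that contains the usual one (take $\tau = \mathrm{id}$):
\[
  D_f(P \,\|\, Q)
  \;\ge\;
  \sup_{T,\;\tau \in \mathcal{M}_L}
  \Big( \mathbb{E}_{x \sim P}\big[\tau(T(x))\big]
        \;-\; \mathbb{E}_{x \sim Q}\big[f^{*}\!\big(\tau(T(x))\big)\big] \Big).
\]
A target-risk bound with the three components described in the abstract would then take the shape
\[
  \varepsilon_{T}(h)
  \;\le\;
  \widetilde{\varepsilon}_{S}(h)
  \;+\; D_{\mathrm{Tighter\text{-}VR}}(\mathcal{D}_S, \mathcal{D}_T)
  \;+\; \lambda^{*},
\]
where $\widetilde{\varepsilon}_{S}(h)$ is a transformed source risk, the middle term is the Tighter-VR discrepancy between the source and target domains, and $\lambda^{*}$ is the ideal-hypothesis residual.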