Rethinking the Flow-based Gradual Domain Adaptation: A Semi-Dual Optimal Transport Perspective
Zhichao Chen ⋅ Zhan Zhuang ⋅ Yunfei Teng ⋅ Eric Wang ⋅ Fangyikang Wang ⋅ Zhengnan Li ⋅ Tianqiao Liu ⋅ Haoxuan Li ⋅ Zhouchen Lin
Abstract
Gradual Domain Adaptation (GDA) aims to mitigate domain shift by progressively adapting models from the source domain to the target domain via intermediate domains. However, real intermediate domains are often unavailable or ineffective, necessitating the synthesis of intermediate samples. Flow-based models have recently been used for this purpose by interpolating between the source and target distributions, but their training typically resorts to sample-based log-likelihood estimation, which can discard useful information and thus degrade GDA performance. The key to addressing this limitation is to construct the intermediate domains directly from samples. To this end, we propose an $\underline{\text{E}}$ntropy-regularized $\underline{\text{S}}$emi-dual $\underline{\text{U}}$nbalanced $\underline{\text{O}}$ptimal $\underline{\text{T}}$ransport (E-SUOT) framework to construct intermediate domains. Specifically, we reformulate flow-based GDA as a Lagrangian dual problem and derive an equivalent objective that circumvents the need for likelihood estimation. However, the dual problem entails an unstable min–max training procedure. To alleviate this issue, we further introduce entropy regularization, which converts the min–max problem into a more stable alternating optimization procedure. Building on this, we propose a novel GDA training framework and provide theoretical analyses of its stability and generalization. Finally, extensive experiments demonstrate the efficacy of the E-SUOT framework.