Spotlight
Representation Subspace Distance for Domain Adaptation Regression
Xinyang Chen · Sinan Wang · Jianmin Wang · Mingsheng Long

Thu Jul 22 07:25 AM -- 07:30 AM (PDT)

Regression, as a counterpart to classification, is a major paradigm with a wide range of applications. Domain adaptation regression extends it by generalizing a regressor from a labeled source domain to an unlabeled target domain. Existing domain adaptation regression methods have achieved positive results, but only in the shallow regime. A question arises: Why is learning invariant representations less effective in the deep regime? A key finding of this paper is that classification is robust to feature scaling but regression is not, and aligning the distributions of deep representations will alter feature scale and impede domain adaptation regression. Based on this finding, we propose to close the domain gap through orthogonal bases of the representation spaces, which are free from feature scaling. Inspired by the Riemannian geometry of the Grassmann manifold, we define a geometrical distance over representation subspaces and learn deep transferable representations by minimizing it. To avoid breaking the geometrical properties of deep representations, we further introduce a bases mismatch penalization to match the ordering of orthogonal bases across representation subspaces. Our method is evaluated on three domain adaptation regression benchmarks, two of which are introduced in this paper. It outperforms state-of-the-art methods significantly, establishing early positive results in the deep regime.
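The geometrical distance the abstract describes can be illustrated with the standard principal-angle construction on the Grassmann manifold: extract orthonormal bases of each domain's representation subspace via SVD, then measure the geodesic distance from the principal angles between the two bases. The sketch below is an illustration of that general idea, not the paper's exact loss; the function names, the choice of subspace dimension `k`, and the use of the plain geodesic distance are all assumptions for illustration.

```python
import numpy as np

def orthonormal_bases(features, k):
    """Top-k orthonormal bases of the subspace spanned by a batch of features.

    features: (n_samples, d) array of deep representations.
    Returns a (d, k) matrix whose columns are orthonormal (right singular
    vectors of the feature matrix), which are free from feature scaling.
    """
    _, _, vt = np.linalg.svd(features, full_matrices=False)
    return vt[:k].T

def subspace_distance(u_s, u_t):
    """Geodesic distance between two subspaces on the Grassmann manifold.

    The singular values of u_s^T u_t are the cosines of the principal
    angles theta_i; the geodesic distance is sqrt(sum(theta_i^2)).
    """
    cosines = np.clip(np.linalg.svd(u_s.T @ u_t, compute_uv=False), -1.0, 1.0)
    angles = np.arccos(cosines)
    return float(np.sqrt(np.sum(angles ** 2)))

# Identical subspaces have distance 0; orthogonal ones are maximally far.
u_s = np.eye(4)[:, :2]          # span{e1, e2} in R^4
u_t = np.eye(4)[:, 2:4]         # span{e3, e4} in R^4
print(subspace_distance(u_s, u_s))  # ~0
print(subspace_distance(u_s, u_t))  # pi/sqrt(2), all principal angles are pi/2
```

Because the bases are orthonormal, this distance depends only on the subspaces themselves, which is why such a criterion sidesteps the feature-scaling issue that the paper identifies for distribution alignment in regression.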

Author Information

Xinyang Chen (Tsinghua University)
Sinan Wang (Tsinghua University)
Jianmin Wang (Tsinghua University)
Mingsheng Long (Tsinghua University)
