
Orthogonality-Promoting Distance Metric Learning: Convex Relaxation and Theoretical Analysis
Pengtao Xie · Wei Wu · Yichen Zhu · Eric Xing

Fri Jul 13 08:00 AM -- 08:20 AM (PDT) @ A9

Distance metric learning (DML), which learns a distance metric from labeled "similar" and "dissimilar" data pairs, is widely used. Recently, several works have investigated orthogonality-promoting regularization (OPR), which encourages the projection vectors in DML to be close to orthogonal, to achieve three effects: (1) high balancedness -- achieving comparable performance on both frequent and infrequent classes; (2) high compactness -- using a small number of projection vectors to achieve a "good" metric; (3) good generalizability -- alleviating overfitting to the training data. While showing promising results, these approaches suffer from three problems. First, they involve solving non-convex optimization problems in which finding the global optimum is NP-hard. Second, they lack a theoretical understanding of why OPR leads to balancedness. Third, the existing generalization error analysis of OPR is not performed directly on the regularizer. In this paper, we address these three issues by (1) seeking convex relaxations of the original non-convex problems so that the global optimum is guaranteed to be achievable; (2) providing a formal analysis of OPR's capability to promote balancedness; (3) providing a theoretical analysis that directly reveals the relationship between OPR and generalization performance. Experiments on various datasets demonstrate that our convex methods are more effective in promoting balancedness, compactness, and generalization, and are computationally more efficient than the non-convex methods.
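To make the setup concrete, below is a minimal, hypothetical sketch of OPR-regularized DML. It is not the paper's convex relaxation: it uses a generic non-convex formulation in which the rows of a projection matrix A define the metric, similar pairs are pulled close and dissimilar pairs pushed beyond a margin, and a regularizer pushes the Gram matrix A Aᵀ toward the identity to promote orthogonality. All function and parameter names (opr_dml_loss, margin, lam) are illustrative assumptions.

```python
# Hypothetical sketch of orthogonality-promoting distance metric learning,
# NOT the paper's convex relaxation. Rows of A are the projection vectors.
import torch

def opr_dml_loss(A, x_sim, y_sim, x_dis, y_dis, margin=1.0, lam=0.1):
    """A: (k, d) projection matrix. Similar pairs (x_sim, y_sim) should be
    close after projection; dissimilar pairs (x_dis, y_dis) should be at
    least `margin` apart (hinge loss)."""
    d_sim = ((x_sim - y_sim) @ A.T).pow(2).sum(dim=1)   # squared projected distances
    d_dis = ((x_dis - y_dis) @ A.T).pow(2).sum(dim=1)
    dml = d_sim.mean() + torch.clamp(margin - d_dis, min=0).mean()
    # Orthogonality-promoting regularizer: push the Gram matrix A A^T toward I.
    gram = A @ A.T
    opr = (gram - torch.eye(A.shape[0])).pow(2).sum()
    return dml + lam * opr

# Toy usage with random pairs (illustrative only).
torch.manual_seed(0)
d, k, n = 10, 5, 32
A = torch.randn(k, d, requires_grad=True)
pairs = [torch.randn(n, d) for _ in range(4)]   # x_sim, y_sim, x_dis, y_dis
opt = torch.optim.SGD([A], lr=0.01)
for _ in range(100):
    opt.zero_grad()
    loss = opr_dml_loss(A, *pairs)
    loss.backward()
    opt.step()
```

As the abstract notes, optimizing such a non-convex objective offers no guarantee of reaching the global optimum, which is the motivation for the convex relaxations studied in the paper.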

Author Information

Pengtao Xie (Carnegie Mellon University)
Wei Wu (Carnegie Mellon University)
Yichen Zhu (Peking University)
Eric Xing (Petuum Inc. and CMU)
