Unsupervised multiplex graph representation learning (UMGRL) has received increasing interest, but few works simultaneously focus on extracting both common and private information. In this paper, we argue that effective and robust UMGRL requires extracting complete and clean common information, as well as private information with more complementarity and less noise. To achieve this, we first investigate disentangled representation learning for the multiplex graph to capture complete and clean common information, and design a contrastive constraint to preserve the complementarity and remove the noise in the private information. Moreover, we theoretically show that the common and private representations learned by our method are provably disentangled and contain more task-relevant and less task-irrelevant information, benefiting downstream tasks. Extensive experiments verify the superiority of the proposed method on different downstream tasks.
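The abstract gives no implementation details, so the following is only a minimal PyTorch-style sketch of the general idea it describes: a shared (common) encoder plus per-view private encoders, a decorrelation term as one possible disentanglement surrogate, and an InfoNCE-style contrast between private representations of two views. All names (`DisentangledMultiplexEncoder`, `disentangle_loss`, `private_contrastive_loss`), the linear encoders used in place of graph message passing, and the specific loss forms are assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DisentangledMultiplexEncoder(nn.Module):
    """Sketch: one common encoder shared across views and one private
    encoder per relation/view of the multiplex graph (hypothetical)."""

    def __init__(self, in_dim, hid_dim, num_views):
        super().__init__()
        self.common = nn.Linear(in_dim, hid_dim)  # shared across all views
        self.private = nn.ModuleList(
            nn.Linear(in_dim, hid_dim) for _ in range(num_views)
        )

    def forward(self, x_views):
        # x_views: list of per-view node-feature tensors, shape (N, in_dim)
        commons = [F.relu(self.common(x)) for x in x_views]
        privates = [F.relu(enc(x)) for enc, x in zip(self.private, x_views)]
        return commons, privates


def disentangle_loss(c, p):
    # One possible disentanglement surrogate: push common and private
    # representations of the same nodes toward zero cosine similarity.
    c = F.normalize(c, dim=-1)
    p = F.normalize(p, dim=-1)
    return (c * p).sum(dim=-1).pow(2).mean()


def private_contrastive_loss(p1, p2, tau=0.5):
    # InfoNCE-style contrast between private representations of two views,
    # a stand-in for the paper's contrastive constraint on private information.
    z1, z2 = F.normalize(p1, dim=-1), F.normalize(p2, dim=-1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)
```

A training step would combine these terms with weighting coefficients and a reconstruction or self-supervised objective on the common representations; the paper's actual architecture and objective may differ.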
Author Information
Yujie Mo (University of Electronic Science and Technology of China)
Yajie Lei (University of Electronic Science and Technology of China)
Jialie SHEN (City, University of London)
Xiaoshuang Shi (University of Electronic Science and Technology of China)
Heng Tao Shen (University of Electronic Science and Technology of China)
Xiaofeng Zhu (University of Electronic Science and Technology of China)
More from the Same Authors
- 2023: Exposing the Fake: Effective Diffusion-Generated Images Detection (RuiPeng Ma · Jinhao Duan · Fei Kong · Xiaoshuang Shi · Kaidi Xu)
- 2023 Poster: A Universal Unbiased Method for Classification from Aggregate Observations (Zixi Wei · Lei Feng · Bo Han · Tongliang Liu · Gang Niu · Xiaofeng Zhu · Heng Tao Shen)
- 2023 Poster: Are Diffusion Models Vulnerable to Membership Inference Attacks? (Jinhao Duan · Fei Kong · Shiqi Wang · Xiaoshuang Shi · Kaidi Xu)