

Poster

Unraveling the Impact of Heterophilic Structures on Graph Positive-Unlabeled Learning

Yuhao Wu · Jiangchao Yao · Bo Han · Lina Yao · Tongliang Liu

Hall C 4-9 #2410
Thu 25 Jul 2:30 a.m. PDT — 4 a.m. PDT

Abstract: While Positive-Unlabeled (PU) learning is vital in many real-world scenarios, its application to graph data remains under-explored. We unveil that a critical challenge for PU learning on graphs lies in edge heterophily, which directly violates the $\textit{irreducibility assumption}$ for $\textit{Class-Prior Estimation}$ (the class prior is essential for building PU learning algorithms) and degrades the latent label inference on unlabeled nodes during classifier training. In response to this challenge, we introduce a new method, named $\textit{$\underline{G}$raph $\underline{P}$U Learning with $\underline{L}$abel Propagation Loss}$ (GPL). Specifically, GPL learns from PU nodes along with an intermediate heterophily reduction, which helps mitigate the negative impact of the heterophilic structure. We formulate this procedure as a bilevel optimization that reduces heterophily in the inner loop and efficiently learns a classifier in the outer loop. Extensive experiments across a variety of datasets show that GPL significantly outperforms baseline methods, confirming its effectiveness.
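For readers who want a concrete picture of the bilevel formulation, the sketch below shows one plausible way to wire the two loops together in PyTorch. It is an assumption-laden illustration, not the paper's implementation: the edge-reweighting variable, the label-propagation-style inner objective, the non-negative PU risk estimator (Kiryo et al., 2017) used as the outer loss, and the `model(x, edge_index, edge_weights)` signature are all hypothetical stand-ins for the components the abstract names.

```python
# Hypothetical sketch of the bilevel procedure from the abstract: an inner
# loop that down-weights heterophilic edges via a label-propagation-style
# loss, and an outer loop that trains a classifier with a PU risk.
# This is NOT the authors' GPL implementation; all names are illustrative.
import torch
import torch.nn.functional as F


def label_prop_loss(edge_weights, edge_index, soft_labels):
    """Penalize edges whose endpoints disagree in soft-label space, so
    minimizing over edge_weights suppresses heterophilic edges."""
    src, dst = edge_index  # edge_index assumed to have shape [2, E]
    disagreement = (soft_labels[src] - soft_labels[dst]).pow(2).sum(dim=1)
    return (edge_weights * disagreement).mean()


def pu_risk(logits, pos_mask, unl_mask, class_prior):
    """Non-negative PU risk (Kiryo et al., 2017), a common choice of
    classification loss once the class prior is available."""
    pos, unl = logits[pos_mask], logits[unl_mask]
    r_pos = F.binary_cross_entropy_with_logits(pos, torch.ones_like(pos))
    r_neg = (F.binary_cross_entropy_with_logits(unl, torch.zeros_like(unl))
             - class_prior
             * F.binary_cross_entropy_with_logits(pos, torch.zeros_like(pos)))
    return class_prior * r_pos + torch.clamp(r_neg, min=0.0)


def train(model, x, edge_index, pos_mask, unl_mask, class_prior,
          outer_steps=200, inner_steps=5):
    """Assumes model(x, edge_index, edge_weights) is a GNN returning one
    logit per node (a hypothetical signature)."""
    raw_weights = torch.zeros(edge_index.size(1), requires_grad=True)
    opt_outer = torch.optim.Adam(model.parameters(), lr=1e-2)
    opt_inner = torch.optim.Adam([raw_weights], lr=1e-1)
    for _ in range(outer_steps):
        # Inner loop: reduce heterophily by reweighting edges against the
        # classifier's current soft labels (held fixed). A real system
        # would also regularize the weights, since the trivial minimizer
        # of this inner loss sets them all to zero.
        with torch.no_grad():
            p = torch.sigmoid(model(x, edge_index, torch.sigmoid(raw_weights)))
            soft = torch.stack([1.0 - p, p], dim=1)
        for _ in range(inner_steps):
            opt_inner.zero_grad()
            label_prop_loss(torch.sigmoid(raw_weights), edge_index, soft).backward()
            opt_inner.step()
        # Outer loop: one classifier step on the reweighted (cleaner) graph.
        opt_outer.zero_grad()
        logits = model(x, edge_index, torch.sigmoid(raw_weights).detach())
        pu_risk(logits, pos_mask, unl_mask, class_prior).backward()
        opt_outer.step()
```

Per the abstract, the point of the inner loop is that heterophily reduction restores the conditions under which Class-Prior Estimation (and hence the PU risk above, which consumes `class_prior`) is well founded; how GPL estimates that prior and regularizes the edge weights is specified in the paper, not in this sketch.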
