

Poster

Rethinking Independent Cross-Entropy Loss For Graph-Structured Data

Rui Miao · Kaixiong Zhou · Yili Wang · Ninghao Liu · Ying Wang · Xin Wang

Hall C 4-9 #2900
[ Project Page ] [ Paper PDF ] [ Poster ]
Tue 23 Jul 2:30 a.m. PDT — 4 a.m. PDT

Abstract:

Graph neural networks (GNNs) have exhibited prominent performance in learning graph-structured data. For the node classification task, traditional supervised learning assumes node labels are i.i.d., simply sums the cross-entropy losses of the independent training nodes, and uses the average loss to optimize the GNN's weights. Unlike other data formats, however, nodes are naturally connected. We find that modeling node labels as independently distributed restricts the GNN's ability to generalize over the entire graph and to defend against adversarial attacks. In this work, we propose a new framework, termed joint-cluster supervised learning, which models the joint distribution of each node with its corresponding cluster. We learn the joint distribution of node and cluster labels conditioned on their representations and train GNNs with the resulting joint loss. In this way, the data-label reference signals extracted from the local cluster explicitly strengthen the discrimination ability on the target node. Extensive experiments demonstrate that joint-cluster supervised learning effectively bolsters GNNs' node classification accuracy. Furthermore, because the reference signals may be free from malicious interference, our learning paradigm significantly protects node classification from adversarial attacks.
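To make the contrast concrete, the sketch below shows the standard independent cross-entropy objective next to one plausible reading of the joint-cluster objective in PyTorch. This is a minimal sketch under stated assumptions: the names `joint_head`, `cluster_repr`, and `cluster_labels`, and the way cluster representations and labels are formed, are illustrative, not the paper's exact formulation.

```python
# A minimal sketch under stated assumptions, not the authors' released
# implementation. `joint_head`, `cluster_repr`, and `cluster_labels`
# are hypothetical names introduced for illustration.
import torch
import torch.nn.functional as F

def independent_ce_loss(node_logits, labels, train_mask):
    """Traditional objective: average per-node cross-entropy,
    treating training-node labels as i.i.d. samples."""
    return F.cross_entropy(node_logits[train_mask], labels[train_mask])

def joint_cluster_ce_loss(node_repr, cluster_repr, labels,
                          cluster_labels, train_mask,
                          joint_head, num_classes):
    """Joint objective: score each node together with its cluster.

    One simple realization of p(y_v, y_c | h_v, h_c): concatenate the
    node and cluster representations, score every (node class,
    cluster class) pair, and take cross-entropy over the C*C joint
    label space.
    """
    h = torch.cat([node_repr, cluster_repr], dim=-1)      # [N, 2d]
    joint_logits = joint_head(h)                          # [N, C*C]
    joint_target = labels * num_classes + cluster_labels  # flat pair index
    return F.cross_entropy(joint_logits[train_mask],
                           joint_target[train_mask])

# Example wiring (hypothetical shapes):
#   d, C = 64, 7
#   joint_head = torch.nn.Linear(2 * d, C * C)
#   cluster_repr[v] could pool node_repr over v's cluster (e.g. from a
#   graph partition), and cluster_labels[v] could be the majority
#   training label within that cluster.
```

Scoring the full C×C space of (node label, cluster label) pairs is only one way to realize a joint distribution over node and cluster labels; the paper's exact factorization of the joint loss may differ.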
