

Poster

Rethinking Independent Cross-Entropy Loss For Graph-Structured Data

Rui Miao · Kaixiong Zhou · Yili Wang · Ninghao Liu · Ying Wang · Xin Wang


Abstract:

Graph neural networks (GNNs) have exhibited prominent performance in learning graph-structured data. For the node classification task, the label distribution of each individual node, conditioned on its representation, is used to predict its class. Under the i.i.d. assumption on node labels, traditional supervised learning simply sums the cross-entropy losses of the independent training nodes and uses the average loss to optimize the GNN's weights. Unlike other data formats, however, nodes are naturally connected, and their classes are correlated with those of neighbors in the same cluster. We find that modeling node label distributions independently restricts a GNN's ability to generalize over the entire graph and to defend against adversarial attacks. In this work, we propose a new framework, termed joint-cluster supervised learning, to model the joint distribution of each node with its corresponding cluster. Rather than assuming node labels are independent, we learn the joint distribution of node and cluster labels conditioned on their representations and train GNNs with the resulting joint loss. In this way, the data-label reference signals extracted from the local cluster explicitly strengthen the discrimination ability on the target node. Extensive experiments on 12 benchmark datasets and 7 backbone models demonstrate that our joint-cluster supervised learning effectively bolsters GNNs' node classification accuracy. Furthermore, benefiting from reference signals that may be free from malicious interference, our learning paradigm significantly protects node classification from adversarial attacks.
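To make the idea of a joint node-cluster loss concrete, below is a minimal sketch (not the authors' exact formulation) of how a joint-cluster cross-entropy could be computed in PyTorch. It assumes node representations from some GNN backbone, precomputed cluster assignments `cluster_ids` (e.g., from a graph partitioner), a hypothetical classifier `joint_head` that maps a concatenated node-cluster representation to C×C joint logits, and a simple majority vote to define each cluster's reference label; all of these design choices are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def joint_cluster_loss(node_repr, node_labels, cluster_ids, num_classes, joint_head):
    """Illustrative joint-cluster cross-entropy (a sketch, not the paper's exact loss).

    node_repr:    [N, d] node representations from a GNN backbone
    node_labels:  [N]    ground-truth node classes
    cluster_ids:  [N]    cluster assignment of each node (assumed precomputed)
    joint_head:   module mapping [N, 2d] -> [N, C * C] joint logits (hypothetical)
    """
    num_clusters = int(cluster_ids.max()) + 1

    # Mean-pool a reference representation per cluster.
    cluster_repr = torch.zeros(num_clusters, node_repr.size(1), device=node_repr.device)
    cluster_repr.index_add_(0, cluster_ids, node_repr)
    counts = torch.bincount(cluster_ids, minlength=num_clusters).clamp(min=1).unsqueeze(1)
    cluster_repr = cluster_repr / counts

    # Use the majority class inside each cluster as the cluster's reference label
    # (one simple choice among many).
    one_hot = F.one_hot(node_labels, num_classes).float()
    cluster_votes = torch.zeros(num_clusters, num_classes, device=node_repr.device)
    cluster_votes.index_add_(0, cluster_ids, one_hot)
    cluster_label = cluster_votes.argmax(dim=1)

    # Predict the joint distribution over (node class, cluster class) pairs
    # and apply cross-entropy on the joint target, instead of independent
    # per-node cross-entropy losses.
    pair = torch.cat([node_repr, cluster_repr[cluster_ids]], dim=1)        # [N, 2d]
    log_joint = F.log_softmax(joint_head(pair), dim=1)                     # [N, C*C]
    joint_target = node_labels * num_classes + cluster_label[cluster_ids]  # [N]
    return F.nll_loss(log_joint, joint_target)
```

In this reading, the cluster term acts as the "reference signal" described in the abstract: the prediction for each node is tied to a cluster-level quantity, so a correct joint prediction requires consistency between the node and its local cluster rather than treating each training node in isolation.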
