Normality Calibration in Semi-supervised Graph Anomaly Detection
Guolei Zeng ⋅ Hezhe Qiao ⋅ Guoguo Ai ⋅ Jinsong Guo ⋅ Guansong Pang
Abstract
Semi-supervised graph anomaly detection (GAD), which assumes a subset of annotated normal nodes is available during training, is among the most widely explored GAD settings. However, the normality learned by existing semi-supervised GAD methods is limited to the labeled normal nodes; these methods often overfit to the given patterns, leading to high detection errors such as high false positive rates. To overcome this limitation, we propose $GraphNC$, a graph normality calibration framework that leverages both labeled and unlabeled data to calibrate the normality of a teacher model (a pre-trained semi-supervised GAD model) jointly in the anomaly score and representation spaces. GraphNC consists of two main components: anomaly score distribution alignment ($ScoreDA$) and perturbation-based normality regularization ($NormReg$). ScoreDA optimizes the anomaly scores of our model by aligning them with the score distribution produced by the teacher. Since the teacher's scores are accurate for most normal nodes and for part of the anomalous nodes, this alignment effectively pulls the anomaly scores of the two classes toward opposite ends, yielding more separable anomaly scores. To mitigate misleading signals from the teacher's inaccurate scores, NormReg regularizes normality in the representation space, making the representations of normal nodes more compact by minimizing a perturbation-guided consistency loss solely on the labeled normal nodes. Comprehensive experiments on six benchmarks demonstrate that GraphNC (1) consistently and substantially improves the performance of teacher models from different GAD methods, and (2) achieves new state-of-the-art performance.
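The two components can be illustrated with a minimal sketch. Everything below is an interpretation of the abstract, not the paper's implementation: the sorted-score MSE stands in for ScoreDA's distribution alignment (the paper's exact objective may differ), and the `encode` function, Gaussian feature perturbation, and loss forms are all illustrative assumptions.

```python
import numpy as np

def score_alignment_loss(student_scores, teacher_scores):
    # ScoreDA (illustrative): align the student's anomaly-score
    # distribution with the frozen teacher's. MSE between sorted score
    # vectors is one simple distribution-level alignment; the paper's
    # actual alignment objective may differ.
    s = np.sort(np.asarray(student_scores, dtype=float))
    t = np.sort(np.asarray(teacher_scores, dtype=float))
    return float(np.mean((s - t) ** 2))

def normreg_loss(encode, x_labeled, noise_std=0.05, rng=None):
    # NormReg (illustrative): perturbation-guided consistency on the
    # labeled normal nodes -- each node's representation and that of a
    # perturbed view should stay close, compacting the normal class in
    # representation space. Gaussian feature noise is an assumed
    # perturbation; graph perturbations (e.g. edge dropping) also fit.
    rng = np.random.default_rng(0) if rng is None else rng
    z = encode(x_labeled)
    z_pert = encode(x_labeled + noise_std * rng.standard_normal(x_labeled.shape))
    return float(np.mean((z - z_pert) ** 2))
```

In a full pipeline, the student would minimize a weighted sum of the two losses while the pre-trained teacher stays fixed, so the alignment term spreads the two classes apart while the consistency term keeps normal representations tight.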