Causal-aware Anomaly Detection for Tabular Data
Abstract
Anomaly detection (AD) methods often ignore causal dependencies and treat anomalies as outliers, which is brittle when anomalies are primarily mechanism violations rather than extreme values. We propose CausalAno, a causal-aware detector that trains a causal GAN on normal data and leverages its discriminator to learn mechanism-consistent representations. We score test samples by fitting a Gaussian model in this feature space and computing the Mahalanobis distance, which measures deviation from the normal causal manifold. We demonstrate the effectiveness of CausalAno through extensive experiments on 28 tabular datasets (18 continuous-only and 10 mixed-type), comparing against 16 state-of-the-art baselines. Our results show consistent improvements in both continuous-only and mixed-type settings. Ablation studies further confirm that the gains come from the causal factorization in the causal GAN rather than from a generic GAN backbone. CausalAno offers a practical and effective solution for real-world applications where anomalies often arise from dependency-breaking behaviors.
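The scoring step described above (fit a Gaussian to features of normal data, score test points by Mahalanobis distance) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the causal GAN discriminator is replaced by a stand-in feature matrix, and the regularization constant is an assumed detail.

```python
import numpy as np

def fit_gaussian(feats):
    # Fit the mean and covariance of normal-data features.
    mu = feats.mean(axis=0)
    cov = np.cov(feats, rowvar=False)
    # Small ridge term for numerical stability (assumed detail, not from the paper).
    cov += 1e-6 * np.eye(cov.shape[0])
    prec = np.linalg.inv(cov)
    return mu, prec

def mahalanobis_scores(feats, mu, prec):
    # Squared Mahalanobis distance of each test sample to the normal Gaussian.
    d = feats - mu
    return np.einsum("ij,jk,ik->i", d, prec, d)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in for discriminator features of normal training samples.
    train_feats = rng.normal(size=(500, 8))
    mu, prec = fit_gaussian(train_feats)
    # One in-distribution point and one shifted (anomalous) point.
    test = np.vstack([rng.normal(size=8), rng.normal(size=8) + 5.0])
    scores = mahalanobis_scores(test, mu, prec)
    print(scores[1] > scores[0])  # the shifted point receives the larger score
```

In practice the features would come from the trained discriminator's penultimate layer, and the anomaly threshold would be calibrated on held-out normal data.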