Ripple Perturbations Through Structure: Likelihood-Constrained Adversarial Attacks on Heterogeneous Tabular Data
Abstract
Generating realistic adversarial examples for tabular data remains challenging due to heterogeneous feature types and asymmetric inter-feature dependencies. Existing approaches typically rely on handcrafted constraints or undirected similarity criteria to delimit the feasible attack region, which often fail to capture the structural dependencies governing tabular data generation. Consequently, such attacks often produce perturbations that are statistically plausible yet semantically inconsistent, and their optimization is prone to stagnation caused by gradient masking. To address this, we propose LCSA, a white-box framework that formulates adversarial generation as optimization over structurally admissible perturbations. LCSA leverages an ensemble of heterogeneous neural Structural Causal Models to infer dependencies and introduces a structure-aware ripple mechanism. Unlike attacks that perturb features in isolation, this mechanism propagates updates downstream, acting as a structural preconditioner that shapes gradient flow to mitigate masking effects. Extensive experiments demonstrate that LCSA outperforms state-of-the-art baselines in 45 of 50 evaluated configurations, yielding adversarial examples with superior structural consistency and transferability.
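The abstract does not specify how the ripple mechanism operates; the following is a hypothetical sketch (not the authors' implementation) of how a perturbation to one feature could be propagated to its descendants through fitted structural equations, assuming a known causal DAG given as a parent map and one structural equation per non-root feature.

```python
# Hypothetical sketch of structure-aware "ripple" propagation, assuming a
# causal DAG over features. `parents` maps each feature to its parent set;
# `equations` maps each non-root feature to a callable that recomputes its
# value from its parents' values. Names and signatures are illustrative.
from graphlib import TopologicalSorter


def ripple_perturb(x, target, delta, parents, equations):
    """Perturb `target` by `delta` and propagate the change downstream.

    x: dict mapping feature name -> value (one sample)
    parents: dict mapping feature name -> set of parent feature names
    equations: dict mapping feature name -> callable(parent_values_dict)
    """
    x = dict(x)
    x[target] += delta
    updated = {target}
    # graphlib expects node -> predecessors, and static_order() yields
    # parents before children, so each equation sees updated inputs.
    for node in TopologicalSorter(parents).static_order():
        if node in updated or not parents[node]:
            continue
        # Recompute only features downstream of the perturbed one.
        if updated & set(parents[node]):
            x[node] = equations[node]({p: x[p] for p in parents[node]})
            updated.add(node)
    return x
```

For a chain a → b → c with b = 2a and c = b + 1, perturbing a by 0.5 from a = 1.0 ripples to b = 3.0 and c = 4.0, keeping the sample consistent with the structural equations rather than perturbing a in isolation.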