Denoising without Diffusion: Fixed-Noise Denoiser Anomaly Detection in Tabular Data
Abstract
While diffusion models have advanced anomaly detection, their reliance on multi-step noise schedules introduces significant computational complexity. In this paper, we demonstrate that the generative capability of diffusion is not required for tabular one-class anomaly detection. We revisit the core principles of denoising without targeting data generation and present a deep-learning approach that streamlines these objectives into a fixed-noise formulation. Unlike standard denoising autoencoders, which rely on reconstruction error, our method employs preconditioning with an explicit linear reference channel. We train a denoising predictor to recover clean samples from perturbed observations and derive anomaly scores from the expected deviation under repeated perturbations. We theoretically motivate this score as a stability proxy via a first-order approximation, rather than merely a distance to the data manifold. On the well-established ADBench benchmark, our approach improves over existing methods by 1.22\% in AUCROC and 1.13\% in AUCPR, the most informative and threshold-independent metrics. Our approach emphasizes structural simplicity and efficiency, indicating that a single-step, stability-based objective outperforms complex generative schedules for tabular data.
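The scoring procedure described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function and parameter names (`anomaly_score`, `sigma`, `n_repeats`) are hypothetical, and the identity denoiser stands in for the trained denoising predictor with its linear reference channel.

```python
# Illustrative sketch of a fixed-noise denoising anomaly score (assumed API).
import numpy as np

def anomaly_score(x, denoiser, sigma=0.1, n_repeats=16, rng=None):
    """Average squared deviation of denoised perturbations from the input x.

    x         : clean sample (1-D array of tabular features)
    denoiser  : callable mapping a perturbed sample to a denoised estimate
    sigma     : fixed noise scale (no multi-step schedule)
    n_repeats : number of repeated perturbations to average over
    """
    rng = np.random.default_rng(rng)
    deviations = []
    for _ in range(n_repeats):
        x_noisy = x + sigma * rng.standard_normal(x.shape)  # fixed-noise perturbation
        x_hat = denoiser(x_noisy)                           # single denoising step
        deviations.append(np.mean((x_hat - x) ** 2))        # deviation from the clean input
    return float(np.mean(deviations))  # expected deviation = anomaly score

# Toy usage with a stand-in identity denoiser (a trained model would replace it):
score = anomaly_score(np.zeros(8), lambda z: z, sigma=0.1, n_repeats=100, rng=0)
```

In this one-class setting, a denoiser trained only on normal data is expected to map perturbed inliers back near the clean sample, so inliers yield small expected deviation while anomalies yield large, unstable deviations.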