Tuning-Free One-Class Discriminant Learning for Tabular Anomaly Detection
Abstract
Anomaly detection (AD) on real-world tabular data is challenged by scarce labels, diverse anomaly types, and high sensitivity to data-specific hyperparameter tuning. We propose the Discriminant Vector Machine for Anomaly Detection (DVM-AD), a robust one-class method derived from discriminant analysis that requires no data-specific tuning. DVM-AD makes one-class discriminant learning effective via a deterministic reference point, then solves a generalized eigenproblem using a Moore–Penrose pseudo-inverse to remain stable in high-dimensional or rank-deficient settings. Moreover, it selects discriminant directions from both extremes of a bounded inverse-scatter spectrum, combining compactness-inducing directions that tighten the normal class with structure-preserving directions that retain informative geometry, thereby addressing the compactness–structure tradeoff. At inference, DVM-AD scores samples by nearest-neighbor distance in the learned space, with a training-geometry normalization that yields bounded, threshold-ready scores. Across 47 ADBench tabular datasets and 10 NLP/CV embedding benchmarks against 28 baselines, DVM-AD achieves the best average AUROC (89.65\%, average rank 2.98) on tabular datasets and remains top-ranked across four anomaly types and on embedding tasks (average rank 1.60, AUROC 72.68\%).
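The pipeline sketched in the abstract (pseudo-inverse eigenproblem, directions from both spectral extremes, normalized nearest-neighbor scoring) can be illustrated with a minimal NumPy sketch. This is an illustrative simplification under stated assumptions, not the paper's implementation: the deterministic reference point is taken to be the training mean, the number of directions per extreme (`k_dirs`) and the median-based normalization are hypothetical choices, and the generalized eigenproblem is reduced to an ordinary symmetric eigenproblem on the pseudo-inverse scatter.

```python
import numpy as np

def dvm_ad_sketch(X_train, X_test, k_dirs=2):
    """Illustrative one-class discriminant sketch (not the paper's DVM-AD)."""
    # Deterministic reference point: here, the training mean (an assumption).
    mu = X_train.mean(axis=0)
    Xc = X_train - mu

    # Scatter matrix and its Moore-Penrose pseudo-inverse,
    # which stays well-defined under rank deficiency.
    S = Xc.T @ Xc / len(Xc)
    S_pinv = np.linalg.pinv(S)

    # Eigen-decomposition of the symmetric inverse scatter; keep directions
    # from BOTH extremes of the spectrum: large eigenvalues (low-variance,
    # compactness-tightening) and small eigenvalues (high-variance,
    # structure-preserving).
    vals, vecs = np.linalg.eigh(S_pinv)          # eigenvalues in ascending order
    idx = np.r_[np.arange(k_dirs), np.arange(len(vals) - k_dirs, len(vals))]
    W = vecs[:, idx]

    # Project and score test samples by nearest-neighbor distance
    # in the learned space.
    Z_train = Xc @ W
    Z_test = (X_test - mu) @ W
    d = np.linalg.norm(Z_test[:, None, :] - Z_train[None, :, :], axis=2)
    nn = d.min(axis=1)

    # Training-geometry normalization (hypothetical variant): divide by the
    # median training nearest-neighbor distance, then squash into [0, 1).
    d_tr = np.linalg.norm(Z_train[:, None, :] - Z_train[None, :, :], axis=2)
    np.fill_diagonal(d_tr, np.inf)
    scale = np.median(d_tr.min(axis=1)) + 1e-12
    return 1.0 - np.exp(-nn / scale)             # bounded, threshold-ready
```

A gross outlier placed far from a Gaussian training cloud receives a score near 1, while held-out inliers score much lower, so a fixed threshold (e.g. 0.5) becomes usable without per-dataset calibration.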