Counterfactual Occlusion-Aware Learning via Visibility Intervention for LiDAR Anomaly Detection
Abstract
LiDAR point cloud anomaly detection is critical for autonomous system safety, yet most existing methods rely only on visible measurements, overlooking occlusion as a structured consequence of the LiDAR sensing process. We argue that anomalies are characterized not only by what is observed, but also by the spatial voids they create, which alter occlusion patterns and volumetric visibility. We propose Counterfactual Occlusion-Visibility Anomaly Learning (COVAL), a framework that intervenes on volumetric visibility during training. Using physics-consistent synthetic anomaly construction, COVAL generates paired factual and counterfactual observations with identical scene geometry but different occlusion patterns. We then introduce two complementary objectives: Visibility-Variant Counterfactual Reconstruction, which models occlusion-induced missing regions, and Visibility-Invariant Counterfactual Consistency, which enforces stable representations across visibility changes. Together, these objectives isolate anomaly-induced structural missingness and, in turn, refine the representation of normal scenes, improving anomaly sensitivity at test time. Experiments on standard LiDAR anomaly segmentation benchmarks show that COVAL achieves state-of-the-art performance.
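To make the two training objectives concrete, the following is a minimal sketch in PyTorch of how a paired factual/counterfactual sample could drive the reconstruction and consistency losses. The module names (encoder, decoder), the occluded_mask input, the specific loss functions, and the weighting lambda_cons are illustrative assumptions, not the paper's exact formulation.

# Minimal sketch of COVAL-style counterfactual objectives (assumed PyTorch setup).
import torch
import torch.nn.functional as F

def coval_losses(encoder, decoder, factual, counterfactual, occluded_mask, lambda_cons=1.0):
    """Compute the two counterfactual objectives on one paired sample.

    factual / counterfactual: voxelized observations of the same scene geometry
        under different occlusion patterns, shape (B, C, D, H, W).
    occluded_mask: binary mask of regions hidden in the counterfactual view but
        visible in the factual one, shape (B, 1, D, H, W).
    """
    z_fact = encoder(factual)
    z_cf = encoder(counterfactual)

    # Visibility-Variant Counterfactual Reconstruction:
    # recover the occlusion-induced missing regions of the counterfactual
    # observation, supervised by the factual geometry in those regions.
    recon = decoder(z_cf)
    loss_recon = F.l1_loss(recon * occluded_mask, factual * occluded_mask)

    # Visibility-Invariant Counterfactual Consistency:
    # the scene representation should stay stable under visibility changes.
    loss_cons = F.mse_loss(z_cf, z_fact.detach())

    return loss_recon + lambda_cons * loss_cons

At test time, under this reading, regions whose missingness cannot be explained by the learned occlusion model stand out as anomaly-induced structural voids.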