MapUQ: Map with Uncertainty Quantification for Robust BEV Vectorized Construction
Abstract
End-to-end online map generation is a core component of autonomous driving perception systems. However, in complex traffic scenes, vectorized mapping in the Bird's-Eye-View (BEV) space suffers from limitations such as target misclassification, spatial localization drift, and ambiguous semantic segmentation. Since uncertainty quantification can alleviate these problems, we propose MapUQ, a robust BEV vectorized mapping method guided by uncertainty-aware optimization. Specifically, we quantify uncertainty at the feature level to enhance semantic perception, apply an error-driven dynamic receptive-field adaptation mechanism at the decoding stage to enforce geometric consistency, and leverage negative-sample information at the output head to improve lane classification accuracy. Experimental results on the nuScenes and Argoverse 2 datasets show that our method outperforms prior approaches in AP across three road types, achieving an average improvement of 1.5% over the baseline with marginal computational overhead. In addition, our method surpasses the baseline on uncertainty metrics such as Expected Calibration Error (ECE) and Negative Log-Likelihood (NLL), significantly improving robustness and mapping accuracy in complex scenarios. Our code is available at https://anonymous.4open.science/r/MapUQ-D287.
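For readers unfamiliar with the calibration metrics named above, the following is a minimal NumPy sketch of how ECE and NLL are commonly computed for a classifier; the binning scheme and function names are illustrative assumptions, not code from the MapUQ repository:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then average the gap
    |accuracy - confidence| per bin, weighted by bin occupancy."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()        # empirical accuracy in this bin
            conf = confidences[mask].mean()   # mean predicted confidence in this bin
            ece += (mask.sum() / n) * abs(acc - conf)
    return ece

def negative_log_likelihood(probs, labels):
    """Mean NLL of the true class under the predicted distribution."""
    eps = 1e-12  # guard against log(0)
    true_class_probs = probs[np.arange(len(labels)), labels]
    return -np.mean(np.log(true_class_probs + eps))
```

Lower is better for both: ECE measures how well predicted confidence tracks empirical accuracy, while NLL also penalizes overconfident wrong predictions, which is why both are standard probes of a model's uncertainty quality.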