Trust3R: Unifying Feed-Forward Pointmap Prediction and Evidential Learning for Trust-Aware 3D Reconstruction
Abstract
Geometric foundation models hold promise for unconstrained dense geometry prediction from uncalibrated images; however, in current feed-forward designs, their predicted confidence scores are heuristic, lack probabilistic interpretation, and often fail to indicate where and how much the predicted geometry can be trusted. To fill this gap, we present Trust3R, a trust-aware 3D reconstruction framework that pairs a lightweight gated residual mean refinement with evidential learning to predict pointmap evidence under a Normal-Inverse-Wishart prior and yield a closed-form multivariate Student-t predictive distribution. This design provides probabilistically grounded pointmap uncertainty estimates while adding only moderate inference overhead. We evaluate on diverse indoor and outdoor benchmarks, comparing against MASt3R's built-in confidence map as well as common uncertainty-aware baselines spanning single-pass heteroscedastic regression and sampling-based methods such as MC dropout and deep ensembles. Experimental results show that Trust3R consistently improves uncertainty ranking across benchmarks, as measured by risk--coverage and sparsification metrics (e.g., on ScanNet++: 25\% lower AURC and 41\% lower AUSE), and generally improves geometric accuracy, enabling uncertainty-aware weighting for downstream alignment and fusion.
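For reference, the closed-form predictive alluded to above follows from standard Normal-Inverse-Wishart conjugacy (a textbook sketch; the paper's exact parameterization may differ): if $x \mid \mu, \Sigma \sim \mathcal{N}(\mu, \Sigma)$ with $(\mu, \Sigma) \sim \mathrm{NIW}(\mu_0, \kappa_0, \nu_0, \Psi_0)$ in $d$ dimensions, marginalizing out $(\mu, \Sigma)$ gives a multivariate Student-t predictive:

```latex
\[
  x \sim t_{\nu_0 - d + 1}\!\left(\mu_0,\;
      \frac{(\kappa_0 + 1)\,\Psi_0}{\kappa_0\,(\nu_0 - d + 1)}\right),
\]
```

whose heavier-than-Gaussian tails provide the calibrated per-point uncertainty that the abstract's ranking metrics (AURC, AUSE) evaluate.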