Poster
VTGaussian-SLAM: RGBD SLAM for Large Scale Scenes with Splatting View-Tied 3D Gaussians
Pengchong Hu · Zhizhong Han
West Exhibition Hall B2-B3 #W-203
Jointly estimating camera poses and mapping scenes from RGBD images is a fundamental task in simultaneous localization and mapping (SLAM). State-of-the-art methods employ 3D Gaussians to represent a scene and render them through splatting for higher efficiency and better rendering quality. However, these methods cannot scale up to extremely large scenes, because their tracking and mapping strategies must keep all 3D Gaussians optimizable in limited GPU memory throughout training to maintain geometry and color consistency with previous RGBD observations. To resolve this issue, we propose novel tracking and mapping strategies that work with a novel 3D representation, dubbed view-tied 3D Gaussians, for RGBD SLAM systems. View-tied 3D Gaussians are simplified Gaussians tied to depth pixels, which removes the need to learn locations, rotations, and multi-dimensional variances. Tying Gaussians to views not only significantly saves storage but also allows us to employ many more Gaussians to represent local details within limited GPU memory. Moreover, our strategies remove the need to keep all Gaussians learnable throughout training, while improving rendering quality and tracking accuracy. We justify the effectiveness of these designs and report better performance than the latest methods on widely used benchmarks in terms of rendering quality, tracking accuracy, and scalability. Please see our project page for code and videos at https://machineperceptionlab.github.io/VTGaussian-SLAM-Project.
Jointly estimating camera poses and mapping scenes from RGBD images is a fundamental task in simultaneous localization and mapping (SLAM). Recent state-of-the-art SLAM methods employ 3D Gaussians to represent a scene, but they face scalability challenges in large scenes, primarily because all Gaussians must be optimized in limited GPU memory throughout training to maintain geometry and color consistency with previous RGBD observations. We introduce novel tracking and mapping strategies that work with a novel 3D representation, which we term view-tied 3D Gaussians, for SLAM systems. Unlike recent approaches that learn the position and shape of each 3D Gaussian, our method ties these Gaussians directly to depth pixels from the camera. This design greatly reduces memory usage and enables the use of many more Gaussians to capture fine scene details. Our method significantly enhances rendering quality, tracking accuracy, and scalability, and achieves better performance than state-of-the-art systems on widely used benchmarks. Please see our project page for code and videos at https://machineperceptionlab.github.io/VTGaussian-SLAM-Project.
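To make the core idea concrete, the sketch below shows one plausible way to construct Gaussians tied to a single RGBD view: centers come from back-projected depth pixels and are kept fixed, so only appearance parameters remain learnable. This is a minimal illustration, not the authors' implementation; the class and function names, the isotropic radius heuristic, and the choice of learnable parameters are assumptions.

```python
import torch

def backproject_depth(depth, K, c2w):
    """Back-project a depth map (H, W) to world-space points (H*W, 3)."""
    H, W = depth.shape
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    z = depth.reshape(-1)
    x = (u.reshape(-1) - cx) / fx * z
    y = (v.reshape(-1) - cy) / fy * z
    pts_cam = torch.stack([x, y, z, torch.ones_like(z)], dim=-1)  # (H*W, 4)
    return (c2w @ pts_cam.T).T[:, :3]                             # (H*W, 3)

class ViewTiedGaussians(torch.nn.Module):
    """Gaussians tied to one RGBD view (illustrative sketch): centers and
    sizes are fixed buffers derived from depth; only color and opacity
    are optimized."""
    def __init__(self, rgb, depth, K, c2w):
        super().__init__()
        centers = backproject_depth(depth, K, c2w)
        # Fixed geometry: no learnable location, rotation, or covariance.
        self.register_buffer("centers", centers)
        # Simple isotropic radius proportional to depth (an assumed heuristic).
        self.register_buffer("radii", depth.reshape(-1) / K[0, 0])
        # Learnable appearance only.
        self.color = torch.nn.Parameter(rgb.reshape(-1, 3).clone())
        self.opacity = torch.nn.Parameter(torch.zeros(centers.shape[0]))
```

Because the geometry is pinned to the observed depth, each per-view Gaussian set needs far fewer learnable parameters, which is consistent with the paper's claim of lower storage and the ability to use many more Gaussians within the same GPU memory budget.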