

Poster

Superpoint Gaussian Splatting for Real-Time High-Fidelity Monocular Dynamic Scene Reconstruction

Diwen Wan · Ruijie Lu · Gang Zeng


Abstract:

Rendering novel-view images of dynamic monocular scenes is a crucial yet challenging task. Current approaches mainly rely on NeRF-based representations for the static scene together with an additional time-variant MLP to model scene deformations, resulting in relatively low rendering quality as well as slow inference speed. To tackle these challenges, we propose a novel framework named Superpoint Gaussian Splatting (SP-GS). Specifically, our framework first employs explicit 3D Gaussians to reconstruct the scene and then clusters Gaussians with similar properties (e.g., rotation, translation, and location) into superpoints. Empowered by these superpoints, our method extends 3D Gaussian Splatting to dynamic scenes with only a slight increase in computational cost. Apart from achieving state-of-the-art visual quality and real-time rendering at high resolutions, the superpoint representation also provides stronger manipulation capability. Extensive experiments demonstrate the practicality and effectiveness of our approach on both synthetic and real-world datasets.
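
To illustrate the core idea, here is a minimal sketch (not the authors' implementation) of grouping Gaussians into superpoints and driving each group with one shared transform. All sizes, feature choices, and the k-means clustering step are assumptions for illustration; the paper only states that Gaussians are clustered by similar properties such as rotation, translation, and location.

```python
# Toy sketch of the superpoint idea: cluster Gaussians by motion-related
# features, then deform each cluster with a single rigid transform so the
# per-frame deformation cost scales with superpoints, not Gaussians.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical per-Gaussian attributes for a toy scene (not real data).
num_gaussians, num_superpoints = 10_000, 200
positions = rng.normal(size=(num_gaussians, 3))      # Gaussian centers
translations = rng.normal(size=(num_gaussians, 3))   # observed per-Gaussian motion
rotations = rng.normal(size=(num_gaussians, 4))      # quaternions (unnormalized here)

# Cluster on concatenated properties (location, translation, rotation),
# mirroring the stated clustering criteria; equal feature weighting is an
# assumption made for this sketch.
features = np.concatenate([positions, translations, rotations], axis=1)
labels = KMeans(n_clusters=num_superpoints, n_init=10, random_state=0).fit_predict(features)

# At render time only num_superpoints transforms need to be predicted
# (e.g., by a small time-conditioned MLP); each Gaussian inherits its
# superpoint's motion. The transforms are faked here for illustration.
sp_translation = rng.normal(size=(num_superpoints, 3)) * 0.01
deformed_positions = positions + sp_translation[labels]

print(deformed_positions.shape)  # (10000, 3)
```

Because the deformation network only has to output a few hundred superpoint transforms per frame instead of one per Gaussian, the added cost over static 3D Gaussian Splatting stays small, which is what enables the reported real-time rendering.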
