Scalable Training of 3D Gaussian Splatting via Out-of-Core Optimization
Chonghao Zhong ⋅ Shi Linfeng ⋅ Chen Hua ⋅ Tiecheng Sun ⋅ Hao Zhao ⋅ Binhang Yuan ⋅ Chaojian Li
Abstract
Training 3D Gaussian Splatting (3DGS) at billion-primitive scale is fundamentally memory-bound: each Gaussian carries a large attribute vector, and the aggregate parameter table quickly exceeds GPU capacity, limiting prior systems to tens of millions of Gaussians on consumer hardware. We observe that 3DGS training is inherently sparse and trajectory-conditioned: each iteration activates only the Gaussians visible from the current camera batch, so GPU memory can serve as a working-set cache rather than a persistent parameter store. Building on this insight, we introduce TideGS, an out-of-core training framework that manages parameters across an SSD–CPU–GPU hierarchy via three synergistic techniques: block-virtualized geometry for SSD-aligned spatial locality, a hierarchical asynchronous pipeline to overlap I/O with computation, and trajectory-adaptive differential streaming that transfers only incremental working-set deltas between iterations. Experiments show that TideGS enables training with over one billion Gaussians on a single consumer GPU while achieving state-of-the-art reconstruction quality on large-scale scenes, exceeding prior out-of-core baselines (e.g., ~100M Gaussians) and standard in-memory training (e.g., ~11M Gaussians).
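The working-set caching idea behind trajectory-adaptive differential streaming can be illustrated with a minimal sketch. This is not the paper's actual implementation; the function name, the `fetch` callback, and the use of plain Python sets and a dict as the "GPU cache" are all illustrative assumptions. The point is only that consecutive camera batches overlap heavily, so the transfer cost per iteration is proportional to the set difference, not the full working set.

```python
def differential_stream(prev_visible: set, curr_visible: set,
                        gpu_cache: dict, fetch) -> dict:
    """Update the GPU-resident cache to hold exactly `curr_visible`.

    prev_visible / curr_visible: IDs of Gaussians active in two
    consecutive iterations. `fetch(gid)` is an assumed callback that
    loads one Gaussian's attributes from the CPU/SSD tiers.
    Only the delta between the two sets crosses the PCIe bus.
    """
    to_load = curr_visible - prev_visible    # newly visible: stream in
    to_evict = prev_visible - curr_visible   # left the frustum: evict
    for gid in to_evict:
        gpu_cache.pop(gid, None)             # write-back of updates omitted in this sketch
    for gid in to_load:
        gpu_cache[gid] = fetch(gid)          # incremental transfer only
    return gpu_cache


# Toy usage: camera moves slightly, so most Gaussians stay resident.
cache = {1: b"g1", 2: b"g2", 3: b"g3"}
differential_stream({1, 2, 3}, {2, 3, 4}, cache,
                    fetch=lambda gid: ("g%d" % gid).encode())
# cache now holds Gaussians {2, 3, 4}; only Gaussian 4 was transferred
```

In a real system the eviction path would write updated parameters back to the CPU/SSD tiers and the loads would run asynchronously to overlap with rendering, per the hierarchical pipeline described above.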