MAC-NeRF: Motion-Aware Curriculum Learning for Dynamic LiDAR NeRFs
Abstract
While LiDAR NeRFs excel in static environments, synthesizing dynamic scenes remains challenging: moving objects break multi-view consistency, causing conflicting supervision and ghosting artifacts across frames. Existing methods typically struggle from the earliest stages of optimization, failing to disentangle valid geometry from motion noise when initial motion priors are unreliable. To address this, we propose MAC-NeRF, a novel LiDAR NeRF framework enhanced by motion-aware curriculum learning for high-fidelity dynamic scene synthesis. First, we propose Rectified Temporal Consistency to resolve temporally conflicting supervision. By filtering out erroneous supervision via forward-backward geometric verification, it builds a curriculum that prioritizes trustworthy temporal correspondences before tackling challenging motions. Second, we propose Confidence-Modulated Frequency Regularization (CMFR) to eliminate geometric ambiguity. It adaptively modulates the frequency regularization bandwidth, progressively transitioning from strict low-frequency constraints that suppress artifacts to full-spectrum modeling that preserves fine-grained detail. Extensive experiments on KITTI-360 and nuScenes demonstrate that MAC-NeRF significantly outperforms state-of-the-art methods in rendering quality. Our code will be made available upon acceptance.
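To make the two mechanisms concrete, the following is a minimal sketch, not the paper's implementation: a forward-backward round-trip check of the kind used for geometric verification, and a frequency mask whose bandwidth grows with training progress and a confidence score. All function names (`round_trip_error`, `freq_mask`) and the linear modulation scheme are illustrative assumptions, not details taken from MAC-NeRF.

```python
import numpy as np

def round_trip_error(p, forward_flow, backward_flow):
    # Warp a 3D point forward in time, then back; a trustworthy
    # correspondence should return close to its starting position.
    # (Illustrative check; the paper's verification may differ.)
    q = p + forward_flow(p)
    p_back = q + backward_flow(q)
    return np.linalg.norm(p_back - p)

def freq_mask(num_bands, progress, confidence):
    # progress in [0, 1]: fraction of training completed.
    # confidence in [0, 1]: low confidence keeps the bandwidth narrow,
    # enforcing low-frequency-only supervision (hypothetical modulation).
    bandwidth = num_bands * progress * confidence
    k = np.arange(num_bands)
    # Soft mask: 1 for bands fully inside the bandwidth,
    # a linear ramp at the boundary, 0 above it.
    return np.clip(bandwidth - k, 0.0, 1.0)
```

At the start of training (`progress = 0`) the mask zeroes every positional-encoding band, and it opens to the full spectrum only as progress and confidence rise; correspondences whose round-trip error exceeds a threshold would simply be excluded from supervision.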