FlowCloud: Learning Continuous Spatiotemporal Dynamics from Unpaired Sparse Point Cloud Snapshots
Yinbo Liu ⋅ Keyang Ye ⋅ Wenshan Sun ⋅ Handi Gao ⋅ Tian Tian
Abstract
Reconstructing unified continuous dynamics from sparse, non-contiguous, and unpaired point cloud snapshots remains a fundamental challenge in spatiotemporal analysis for computer vision and developmental biology. Existing methods, including scene flow and Optimal Transport-based approaches, are limited by their explicit reliance on point-wise correspondences, by cumulative errors from frame-to-frame propagation and temporal discontinuity, and by a limited ability to model multi-attribute dynamics such as gene expression and population changes. We propose FlowCloud, a variational Neural Ordinary Differential Equation (Neural ODE) generative framework. FlowCloud aggregates information from all observed time points into a joint latent representation that initializes a latent trajectory $z(t)$ governed by a Neural ODE, enabling continuous modeling of spatiotemporal evolution while mitigating propagation-induced errors and preserving temporal consistency. Training requires no predefined correspondences and uses a multi-faceted objective with complementary roles: Sinkhorn distance for global distribution alignment, Chamfer distance for local geometric consistency, trajectory regularization to encourage smooth and physically plausible dynamics, and supervised losses for multi-attribute prediction. Experiments on human motion and developmental biology datasets demonstrate improved interpolation accuracy and promising short-term extrapolation performance. By unifying geometry, attributes, and population dynamics within a continuous latent framework, FlowCloud offers a novel and robust solution for continuous dynamic reconstruction from unstructured spatiotemporal observations.
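To make the correspondence-free objective concrete, the sketch below illustrates the three unsupervised terms named in the abstract: an entropy-regularized Sinkhorn distance for global distribution alignment, a Chamfer distance for local geometry, and a second-difference penalty on the latent trajectory for smoothness. This is a minimal NumPy illustration under our own assumptions (uniform empirical measures, squared Euclidean ground cost, fixed iteration count), not the FlowCloud implementation, whose exact formulations are not given in the abstract.

```python
import numpy as np

def chamfer_distance(x, y):
    """Symmetric Chamfer distance between point sets x (n,d) and y (m,d)."""
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # pairwise squared dists
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

def sinkhorn_distance(x, y, eps=0.1, n_iters=200):
    """Entropy-regularized OT cost between uniform measures on x and y."""
    n, m = len(x), len(y)
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)   # ground cost matrix
    K = np.exp(-C / eps)                                  # Gibbs kernel
    a, b = np.ones(n) / n, np.ones(m) / m                 # uniform marginals
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):                              # Sinkhorn iterations
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]                       # transport plan
    return (P * C).sum()

def trajectory_smoothness(z_traj):
    """Penalize second differences of a latent trajectory z_traj (T, k)."""
    acc = z_traj[2:] - 2.0 * z_traj[1:-1] + z_traj[:-2]   # discrete acceleration
    return (acc ** 2).mean()

# Toy example: two nearby point clouds and a latent trajectory.
rng = np.random.default_rng(0)
x = rng.normal(size=(64, 3))
y = x + 0.01 * rng.normal(size=(64, 3))
z_traj = rng.normal(size=(10, 16))

# Illustrative combined loss; the weights are placeholders, not paper values.
loss = sinkhorn_distance(x, y) + chamfer_distance(x, y) \
       + 0.1 * trajectory_smoothness(z_traj)
```

In the full framework these terms would be combined with supervised attribute losses and backpropagated through the Neural ODE solver; here they are shown standalone to clarify what each term measures.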