Relative Entropy Estimation in Function Space: Theory and Applications to Trajectory Inference
Abstract
Trajectory Inference (TI) seeks to reconstruct latent dynamical processes from snapshot data, which consist of independent samples from time-indexed marginals of an underlying stochastic system. In applications such as single-cell genomics, destructive measurements preclude direct observation of trajectories, making inference of the induced distribution over paths fundamentally ill-posed given finitely many marginals. Despite extensive work on modeling approaches, little attention has been paid to evaluating the inferred object itself, namely a probability measure over trajectories. Since path-space laws are not identifiable from snapshot data, evaluation protocols based on predictive accuracy at held-out marginals provide only limited information and fail to constrain trajectory-level behavior. We introduce a general framework for estimating the Kullback–Leibler (KL) divergence between probability measures on function space: we derive a tractable estimator that can be computed from data and scales to realistic problem sizes (number of snapshots and samples per snapshot). We apply this framework to a systematic empirical study of trajectory inference methods on synthetic and real datasets. We show that current evaluation metrics yield inconsistent assessments, whereas path-space KL provides a coherent comparison that reveals discrepancies in inferred dynamics, particularly in regions with sparse or missing data. These results support the use of functional KL as a principled criterion for evaluating TI methods under partial observability.