Forward-Chaining Temporal Point Process
Abstract
Event sequences from complex systems, such as clinical workflows, are often sparse and incomplete. As a result, downstream models are trained on data that only partially captures the underlying dynamics. Synthetic sequence generation can augment real data by filling in missing structure and improving coverage of rare patterns, but generated trajectories must remain realistic, satisfy domain constraints, and remain controllable. We propose the Forward-Chaining Temporal Point Process (FC-TPP), a framework for constraint-aware and controllable sequence generation in continuous time. FC-TPP maintains an explicit latent symbolic state encoding high-level predicates, which evolves through a differentiable multi-hop forward-chaining operator. Logical rules update the latent state based on recent events, while a temporal point process decoder generates future event times and types conditioned on this evolving state. By tying the generative dynamics to multi-hop reasoning in latent space, FC-TPP incorporates symbolic structure throughout generation rather than relying directly on raw event histories. Experiments on synthetic data and four semi-synthetic/real-world benchmarks—LogiCity, MIMIC-IV, EPIC-100, and IKEA ASM—show that FC-TPP achieves higher generation quality under limited and incomplete data, with stronger constraint adherence and greater controllability than purely neural and prior neuro-symbolic baselines.
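To make the core mechanism concrete, the sketch below illustrates one way a differentiable multi-hop forward-chaining step and a state-conditioned intensity could look. This is a minimal illustration under assumed design choices, not the paper's implementation: the predicate names, the product t-norm for soft conjunction, the `max` aggregation across rule firings, and the log-linear intensity form are all placeholders for the learned components described in the abstract.

```python
import numpy as np

def soft_and(vals):
    # Product t-norm: a differentiable stand-in for logical conjunction.
    return float(np.prod(vals))

def forward_chain(state, rules, hops=2):
    """One multi-hop forward-chaining pass over soft predicate truth values.

    state: dict mapping predicate name -> truth value in [0, 1]
    rules: list of (body_predicates, head_predicate, rule_weight)
    Each hop applies every rule to the previous hop's state, so facts
    derived in hop k can trigger further rules in hop k + 1.
    """
    s = dict(state)
    for _ in range(hops):
        updates = dict(s)
        for body, head, w in rules:
            fired = w * soft_and([s[p] for p in body])
            # max is a simple (subdifferentiable) aggregation choice that
            # keeps repeated firings of the same rule idempotent.
            updates[head] = max(updates.get(head, 0.0), fired)
        s = updates
    return s

def intensity(state, base_rate, weights):
    # A toy log-linear TPP intensity conditioned on the latent symbolic
    # state; the actual decoder would be a learned neural model.
    return base_rate * np.exp(
        sum(weights[p] * v for p, v in state.items() if p in weights)
    )

# Hypothetical clinical-style predicates and rules, for illustration only.
state = {"fever": 0.9, "cough": 0.8, "flu": 0.0, "prescribe": 0.0}
rules = [
    (("fever", "cough"), "flu", 0.95),   # fever AND cough => flu
    (("flu",), "prescribe", 0.9),        # flu => prescribe
]

one_hop = forward_chain(state, rules, hops=1)
two_hop = forward_chain(state, rules, hops=2)
```

With a single hop, `flu` is derived but `prescribe` is not, since the second rule only sees the pre-hop state; the second hop chains through the freshly derived `flu` to raise `prescribe`, which in turn raises the event intensity. This is the sense in which generation is tied to multi-hop reasoning in latent space.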