Di-BiLPS: Denoising induced Bidirectional Latent-PDE-Solver under Sparse Observations
Abstract
Partial differential equations (PDEs) are fundamental for modeling complex natural and physical phenomena. In many real-world applications, however, observational data are \textbf{extremely sparse}, which severely limits the applicability of both classical numerical solvers and existing neural approaches. While neural methods have shown promising results under moderately sparse observations, their inference efficiency at high resolutions is limited, and their accuracy degrades substantially in the extremely sparse regime. In this work, we propose \textbf{Di-BiLPS}, a unified neural framework that effectively handles \textbf{both forward and inverse} PDE problems under extremely sparse observations. Di-BiLPS combines a variational autoencoder that compresses high-dimensional inputs into a compact latent space, a latent diffusion module that models uncertainty, and contrastive learning that aligns representations. Operating entirely in this latent space, the framework achieves efficient inference while retaining a flexible input–output mapping. In addition, we introduce a \textbf{PDE-informed denoising algorithm} based on a variance-preserving diffusion process, which further improves inference efficiency. Extensive experiments on multiple PDE benchmarks demonstrate that Di-BiLPS consistently achieves \textbf{SOTA performance under extremely sparse inputs (as low as 3\%)}, while substantially reducing computational cost. Moreover, Di-BiLPS enables \textbf{zero-shot super-resolution}, as it can make predictions over continuous spatial–temporal domains.