Bias-Spectrum Neural Processes for Parametric PDEs: Architecture Priors Meet PDE Constraints
Abstract
Parametric partial differential equations (PDEs) serve as fundamental models across science and engineering, yet constructing fast and accurate surrogate models from sparse, irregularly sampled observations with reliable uncertainty quantification remains challenging. Existing approaches struggle to simultaneously handle variable observation patterns, preserve physical consistency, and provide well-calibrated predictive uncertainty. We introduce Bias-Spectrum Neural Processes (BSNP), a unified meta-learning framework that systematically integrates weak structural priors (translation equivariance, locality) with strong physical priors (governing equations and boundary conditions). BSNP addresses two critical obstacles: discretization overfitting, mitigated by stochastic collocation that resamples residual evaluation points at each training step; and uncertainty collapse, avoided by mean-field enforcement that applies PDE constraints only to predictive means while preserving learned uncertainty. Comprehensive experiments on nonlinear Poisson equations, Burgers dynamics, and Navier-Stokes flows demonstrate that BSNP achieves superior accuracy and well-calibrated uncertainty quantification in sparse-data regimes.
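To make the two mechanisms named above concrete, the following is a minimal sketch, not the paper's implementation: it pairs stochastic collocation (fresh residual evaluation points drawn each step, so the loss never overfits one fixed discretization) with mean-field enforcement (the PDE residual of a 1D Poisson problem, `-u'' = f`, computed only on a surrogate's predictive mean, leaving any uncertainty estimate untouched). The surrogate `predictive_mean` and all names here are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def predictive_mean(x, w):
    # Hypothetical surrogate mean: a tiny random-feature model standing in
    # for the neural process's predictive mean; only the mean enters the
    # physics loss (mean-field enforcement), never predictive samples.
    return np.tanh(np.outer(x, w)).sum(axis=1)

def pde_residual_on_mean(x, w, f, h=1e-3):
    # Residual of -u''(x) = f(x), with u'' approximated by a central
    # finite difference of the predictive mean.
    u_plus = predictive_mean(x + h, w)
    u_minus = predictive_mean(x - h, w)
    u = predictive_mean(x, w)
    lap = (u_plus - 2.0 * u + u_minus) / h**2
    return -lap - f(x)

# Example forcing term for the toy Poisson problem.
f = lambda x: np.pi**2 * np.sin(np.pi * x)
w = rng.normal(size=8)

losses = []
for step in range(5):
    # Stochastic collocation: resample residual evaluation points at
    # every step instead of fixing one collocation grid.
    x_col = rng.uniform(0.0, 1.0, size=64)
    r = pde_residual_on_mean(x_col, w, f)
    losses.append(float(np.mean(r**2)))
```

In a full training loop this physics loss would be combined with the data-fit and uncertainty terms of the meta-learning objective; the sketch only illustrates how resampling and mean-only residuals interact.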