Heavy-tailed Physics-Informed Neural Networks
Jephte Abijuru ⋅ Mayank Kumar Nagda ⋅ Phil Sidney Ostheimer ⋅ Jan Tauberschmidt ⋅ Sebastian Vollmer ⋅ Stephan Mandt ⋅ Marius Kloft ⋅ Sophie Fellenz
Abstract
Physics-informed neural networks (PINNs) enforce physical laws by minimizing partial differential equation (PDE) residuals and auxiliary constraints. Standard training relies on a mean-squared error (MSE) objective, which implicitly assumes independent Gaussian residuals with a fixed global variance. We show theoretically and empirically that residuals encountered during PINN training are heterogeneous and heavy-tailed, revealing a systematic mismatch with this assumption. As a consequence, a small number of large residuals can disproportionately dominate both the loss and gradient, leading to poorly balanced optimization dynamics. Motivated by this mismatch, we adopt a Student-$t$ residual model to explicitly capture heavy-tailed behavior. An equivalent hierarchical representation yields an expectation–maximization (EM) algorithm that alternates between estimating residual-dependent weights and optimizing network parameters via a weighted MSE objective, allowing existing PINN solvers to be reused in the M-step. The resulting training dynamics bound the influence of extreme residuals and admit almost sure convergence guarantees under standard stochastic optimization assumptions. Experiments across a diverse suite of challenging PDE benchmarks demonstrate consistently improved solution accuracy and robustness compared to standard PINN training.
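The E-step described in the abstract can be sketched with the standard EM update for a Student-$t$ scale-mixture model: each residual receives a weight that shrinks as the residual grows, so extreme residuals have bounded influence on the weighted-MSE M-step. The code below is a minimal illustrative sketch, not the paper's implementation; the degrees of freedom `nu` and scale `sigma2` are assumed hyperparameters for the example.

```python
import numpy as np

def student_t_em_weights(residuals, nu=4.0, sigma2=1.0):
    """E-step: expected latent precision under a Student-t residual model.
    Weights decrease as |residual| grows, bounding outlier influence."""
    return (nu + 1.0) / (nu + residuals**2 / sigma2)

def weighted_mse(residuals, weights):
    """M-step objective: a weighted MSE that existing PINN solvers can minimize."""
    return np.mean(weights * residuals**2)

# Toy example: one extreme residual among otherwise small ones
r = np.array([0.1, -0.2, 0.05, 10.0])
w = student_t_em_weights(r)
loss = weighted_mse(r, w)
```

In this toy example the outlier at `r = 10.0` receives a much smaller weight than the well-fit points, which is the mechanism by which the heavy-tailed model rebalances the loss and its gradients.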