Taming the Loss Landscape of PINNs with Noisy Feynman–Kac Supervision: Operator Preconditioning and Non-Asymptotic Error Bounds
Nathanael Tepakbong ⋅ Hanyu HU ⋅ Chengyu Liu ⋅ Xiang ZHOU
Abstract
Physics-Informed Neural Networks (PINNs) often train slowly or fail to converge on challenging partial differential equations (PDEs), a behavior recently linked to severely ill-conditioned loss landscapes inherited from the underlying differential operator. We propose FK-PINNs, a simple modification of the PINN objective that provably improves this conditioning: at a few points in the domain we compute Feynman–Kac estimates of the solution by Monte Carlo averaging, and add the resulting data-fidelity term to the standard residual and boundary losses. For a broad class of linear second-order PDEs admitting a Feynman–Kac representation, we show that this term acts as an operator-level preconditioner: for suitable weights, our comparison bounds guarantee a substantially smaller condition number than under the standard PINN loss, even for modest Monte Carlo sample budgets. Leveraging learning-theoretic tools, we derive non-asymptotic $L^2(\Omega)$-error bounds for FK-PINNs with $\tanh$ activation by decomposing the excess risk into approximation, statistical, and optimization error terms and tightly controlling the Monte Carlo error tails. Along the way, we establish pseudo-dimension bounds for first- and second-order derivatives of $\tanh$ neural networks, which are of independent interest and, to the best of our knowledge, new. Numerical experiments on Poisson, Schrödinger, mean exit time, and committor problems corroborate the theory and show that FK-PINNs can successfully solve PDEs for which vanilla PINNs exhibit severe failure modes.
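To make the construction concrete, the following display sketches one standard form of the Feynman–Kac representation and of the augmented objective described in the abstract; the sign convention, the weights $\lambda_r, \lambda_b, \lambda_d$, the sample counts $N_r, N_b, N_d, M$, and the estimator notation $\hat u_{\mathrm{FK}}$ are illustrative assumptions rather than the paper's own notation. For the Dirichlet problem $\mathcal{L}u = -f$ in $\Omega$, $u = g$ on $\partial\Omega$, with $\mathcal{L} = \tfrac{1}{2}\,\sigma\sigma^{\top}\!:\!\nabla^2 + b\cdot\nabla - c$ and $c \ge 0$, the solution admits the stochastic representation

$$u(x) \;=\; \mathbb{E}\!\left[\, g(X_\tau)\, e^{-\int_0^{\tau} c(X_s)\,ds} \;+\; \int_0^{\tau} f(X_t)\, e^{-\int_0^{t} c(X_s)\,ds}\, dt \;\middle|\; X_0 = x \right],$$

where $X$ solves the SDE $dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t$ and $\tau$ is its first exit time from $\Omega$. Averaging $M$ simulated paths started at a data point $z_k$ yields a noisy estimate $\hat u_{\mathrm{FK}}(z_k)$, and an FK-PINN-style objective of the kind described above adds the corresponding data-fidelity term to the usual residual and boundary losses:

$$\mathcal{L}_{\mathrm{FK}}(\theta) \;=\; \frac{\lambda_r}{N_r}\sum_{i=1}^{N_r}\big(\mathcal{L}u_\theta(x_i) + f(x_i)\big)^2 \;+\; \frac{\lambda_b}{N_b}\sum_{j=1}^{N_b}\big(u_\theta(y_j) - g(y_j)\big)^2 \;+\; \frac{\lambda_d}{N_d}\sum_{k=1}^{N_d}\big(u_\theta(z_k) - \hat u_{\mathrm{FK}}(z_k)\big)^2.$$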