

Poster

Gradient Descent Finds the Global Optima of Two-Layer Physics-Informed Neural Networks

Yihang Gao · Yiqi Gu · Michael Ng

Exhibit Hall 1 #512

Abstract:

The main aim of this paper is to conduct a convergence analysis of gradient descent for two-layer physics-informed neural networks (PINNs). Here, the loss function involves derivatives of the network outputs with respect to their inputs, so the interaction between the trainable parameters is more complicated than in simple regression and classification tasks. We first establish the positive definiteness of the relevant Gram matrices and prove that gradient flow finds the global optima of the empirical loss under over-parameterization. We then show that standard gradient descent converges to the global optima of the loss with a proper choice of learning rate. Our analysis framework covers various categories of PDEs (e.g., linear second-order PDEs) and common types of network initialization (e.g., LeCun uniform initialization). Our theoretical results do not require strict assumptions on the training samples and impose a looser requirement on the network width than some previous works.
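To make the setting concrete, below is a minimal sketch (not the authors' code) of a two-layer PINN for a 1D Poisson problem u''(x) = f(x) on [0, 1], trained with plain gradient descent. The width, learning rate, sample points, and right-hand side f are illustrative assumptions; the point is that the loss contains derivatives of the network output with respect to its input, which is what complicates the parameter interactions relative to plain regression.

```python
import jax
import jax.numpy as jnp

m = 512                      # network width (over-parameterization parameter, assumed)
key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
params = {
    "W": jax.random.normal(k1, (m, 1)),              # first-layer weights
    "a": jax.random.normal(k2, (m,)) / jnp.sqrt(m),  # output-layer weights
}

def u(params, x):
    """Two-layer network u(x) = a^T tanh(W x), with scalar input x."""
    return jnp.dot(params["a"], jnp.tanh(params["W"][:, 0] * x))

# Second derivative of the network output with respect to its *input*,
# needed for the PDE residual in the PINN loss.
u_xx = jax.grad(jax.grad(u, argnums=1), argnums=1)

def f(x):
    # Assumed right-hand side, chosen only for illustration.
    return -(jnp.pi ** 2) * jnp.sin(jnp.pi * x)

def loss(params, xs):
    # Interior PDE residual plus a simple boundary penalty.
    res = jax.vmap(lambda x: u_xx(params, x) - f(x))(xs)
    bc = u(params, 0.0) ** 2 + u(params, 1.0) ** 2
    return 0.5 * jnp.mean(res ** 2) + bc

xs = jnp.linspace(0.0, 1.0, 100)   # training samples (assumed)
lr = 1e-3                          # learning rate (assumed "proper choice")
grad_loss = jax.jit(jax.grad(loss))
for _ in range(1000):
    g = grad_loss(params, xs)
    params = jax.tree_util.tree_map(lambda p, gp: p - lr * gp, params, g)
```

In this sketch, gradient descent is applied to the empirical PINN loss exactly as in the theorem statement's training procedure, but the particular architecture details (tanh activation, boundary penalty) are assumptions made for a runnable example rather than the paper's exact setup.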
