## Dynamics of Deep Neural Networks and Neural Tangent Hierarchy

### Jiaoyang Huang · Horng-Tzer Yau

Keywords: [ Deep Learning Theory ] [ Kernel Methods ] [ Non-convex Optimization ] [ Optimization ]

Abstract: The evolution of a deep neural network trained by gradient descent in the overparametrization regime can be described by its neural tangent kernel (NTK) \cite{jacot2018neural, du2018gradient1, du2018gradient2, arora2019fine}. It was observed \cite{arora2019exact} that there is a performance gap between kernel regression using the limiting NTK and deep neural networks. We study the dynamics of neural networks of finite width and derive an infinite hierarchy of differential equations, the neural tangent hierarchy (NTH). We prove that the NTH truncated at level $p\geq 2$ approximates the dynamics of the NTK up to arbitrary precision under certain conditions on the network width and the data set dimension. The assumptions needed for these approximations become weaker as $p$ increases. Finally, the NTH can be viewed as a higher-order extension of the NTK; in particular, the NTH truncated at $p=2$ recovers the NTK dynamics.
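To make the central object concrete, the sketch below computes the empirical NTK of a small finite-width network: the Gram matrix $\Theta(x_i, x_j) = \nabla_\theta f(x_i) \cdot \nabla_\theta f(x_j)$ of parameter gradients. This is a minimal illustration of the kernel the abstract refers to, not the authors' code; the two-layer tanh architecture and all function names are illustrative assumptions.

```python
import numpy as np

def init_params(d_in, width, seed=0):
    # NTK-style initialization for a two-layer network (illustrative choice)
    rng = np.random.default_rng(seed)
    W1 = rng.standard_normal((width, d_in)) / np.sqrt(d_in)
    w2 = rng.standard_normal(width) / np.sqrt(width)
    return W1, w2

def forward(params, x):
    # scalar output f(x) = w2 . tanh(W1 x)
    W1, w2 = params
    return w2 @ np.tanh(W1 @ x)

def param_gradient(params, x):
    # gradient of the scalar output w.r.t. all parameters, flattened
    W1, w2 = params
    h = np.tanh(W1 @ x)
    dW1 = np.outer(w2 * (1 - h**2), x)  # chain rule through tanh
    dw2 = h
    return np.concatenate([dW1.ravel(), dw2])

def empirical_ntk(params, xs):
    # Theta[i, j] = <grad f(x_i), grad f(x_j)>; symmetric and PSD by construction
    J = np.stack([param_gradient(params, x) for x in xs])
    return J @ J.T
```

At finite width this kernel changes during training (its evolution is what the NTH tracks); in the infinite-width limit it stays frozen at its initialization value.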