Poster
Tight Bounds on the Smallest Eigenvalue of the Neural Tangent Kernel for Deep ReLU Networks
Quynh Nguyen · Marco Mondelli · Guido Montufar

Wed Jul 21 09:00 AM -- 11:00 AM (PDT)
A recent line of work has analyzed the theoretical properties of deep neural networks via the Neural Tangent Kernel (NTK). In particular, the smallest eigenvalue of the NTK has been related to the memorization capacity, the global convergence of gradient descent algorithms, and the generalization of deep nets. However, existing results either provide bounds in the two-layer setting or assume that the spectrum of the NTK matrices is bounded away from 0 for multi-layer networks. In this paper, we provide tight bounds on the smallest eigenvalue of NTK matrices for deep ReLU nets, both in the limiting case of infinite widths and for finite widths. In the finite-width setting, the network architectures we consider are fairly general: we require the existence of one wide layer with on the order of $N$ neurons, where $N$ is the number of data samples, while the scaling of the remaining layer widths is arbitrary (up to logarithmic factors). To obtain our results, we analyze various quantities of independent interest: we give lower bounds on the smallest singular value of hidden feature matrices, and upper bounds on the Lipschitz constant of input-output feature maps.
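
The central object here is the smallest eigenvalue $\lambda_{\min}$ of the NTK Gram matrix on $N$ data samples. As a rough illustration of that object (a minimal sketch, not code from the paper), the following Python snippet computes the limiting (infinite-width) NTK of a deep ReLU network via the standard arc-cosine kernel recursion, assuming He-style 2/fan-in weight scaling; the function name `relu_ntk` is hypothetical.

```python
import numpy as np

def relu_ntk(X, depth):
    """Limiting NTK Gram matrix of a depth-`depth` ReLU net on data X (N x d).

    Standard recursion: Sigma^(0) = X X^T; each layer applies the degree-1
    arc-cosine kernel (for Sigma) and its derivative counterpart (Sigma_dot),
    with 2/fan-in scaling so that norms are preserved on the unit sphere.
    """
    S = X @ X.T                      # Sigma^(0): input Gram matrix
    Theta = S.copy()                 # NTK accumulator Theta^(0)
    for _ in range(depth):
        norms = np.sqrt(np.diag(S))
        outer = np.outer(norms, norms)
        corr = np.clip(S / outer, -1.0, 1.0)
        ang = np.arccos(corr)        # pairwise angles between features
        # Sigma^(h) = 2 * E[relu(u) relu(v)] for centered Gaussian (u, v)
        S = (outer / np.pi) * (np.sin(ang) + (np.pi - ang) * np.cos(ang))
        # Sigma_dot^(h) = 2 * E[relu'(u) relu'(v)]
        S_dot = (np.pi - ang) / np.pi
        # Theta^(h) = Theta^(h-1) * Sigma_dot^(h) + Sigma^(h)  (elementwise)
        Theta = Theta * S_dot + S
    return Theta

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # samples on the unit sphere
K = relu_ntk(X, depth=3)
print("lambda_min(NTK) =", np.linalg.eigvalsh(K)[0])
```

On generic unit-norm inputs this $\lambda_{\min}$ comes out strictly positive; the paper's contribution, per the abstract, is to bound this quantity tightly and to handle finite-width networks, where only one layer needs on the order of $N$ neurons.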

Author Information

Quynh Nguyen (MPI-MIS)
Marco Mondelli (IST Austria)

Marco Mondelli received the B.S. and M.S. degrees in Telecommunications Engineering from the University of Pisa, Italy, in 2010 and 2012, respectively. In 2016, he obtained his Ph.D. degree in Computer and Communication Sciences at the École Polytechnique Fédérale de Lausanne (EPFL), Switzerland. He is currently an Assistant Professor at the Institute of Science and Technology Austria (IST Austria). Prior to that, he was a Postdoctoral Scholar in the Department of Electrical Engineering at Stanford University, CA, USA, from February 2017 to August 2019. He was also a Research Fellow with the Simons Institute for the Theory of Computing, UC Berkeley, CA, USA, for the program on Foundations of Data Science from August to December 2018. His research interests include data science, machine learning, information theory, wireless communication systems, and modern coding theory. He has received a number of fellowships and awards, including the Jack K. Wolf ISIT Student Paper Award in 2015, the STOC Best Paper Award in 2016, the EPFL Doctorate Award in 2018, the Simons-Berkeley Research Fellowship in 2018, and the Lopez-Loreta Prize in 2019.

Guido Montufar (UCLA)
