Finding trainable sparse networks through Neural Tangent Transfer

Tianlin Liu, Friedemann Zenke

Sessions: Tue Jul 14, 2 p.m. PDT and Wed Jul 15, 1 a.m. PDT

Abstract:

Deep neural networks have dramatically transformed machine learning, but their memory and energy demands are substantial. The requirements of real biological neural networks are rather modest in comparison, and one feature that might underlie this austerity is their sparse connectivity. In deep learning, trainable sparse networks that perform well on a specific task are usually constructed using label-dependent pruning criteria. In this article, we introduce Neural Tangent Transfer, a method that instead finds trainable sparse networks in a label-free manner. Specifically, we find sparse networks whose training dynamics, as characterized by the neural tangent kernel, mimic those of dense networks in function space. Finally, we evaluate our label-agnostic approach on several standard classification tasks and show that the resulting sparse networks achieve higher classification performance while converging faster.
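
The objective described in the abstract can be sketched compactly. Below is a minimal JAX illustration of an NTT-style, label-free loss: a masked "sparse" network is fit so that both its outputs and its empirical neural tangent kernel on unlabeled inputs match those of a dense teacher. All helper names (`init_mlp`, `forward`, `apply_mask`, `ntt_loss`) and the `ntk_weight` trade-off parameter are illustrative assumptions; the exact loss weighting and mask-selection procedure are those given in the paper, not reproduced here.

```python
# A minimal JAX sketch of an NTT-style, label-free objective
# (an illustrative assumption, not the authors' reference code).
import jax
import jax.numpy as jnp

def init_mlp(key, sizes):
    """Initialize a small MLP as a list of (W, b) pairs."""
    params = []
    for m, n in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (m, n)) / jnp.sqrt(m), jnp.zeros(n)))
    return params

def forward(params, x):
    for W, b in params[:-1]:
        x = jnp.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

def empirical_ntk(params, x):
    """Empirical NTK: Theta = J J^T, where J stacks the Jacobians of the
    flattened network outputs with respect to all parameters."""
    jac = jax.jacobian(lambda p: forward(p, x).ravel())(params)
    J = jnp.concatenate(
        [l.reshape(l.shape[0], -1) for l in jax.tree_util.tree_leaves(jac)],
        axis=1)
    return J @ J.T

def apply_mask(params, mask):
    """Zero out pruned weights; biases are left dense for simplicity."""
    return [(W * m, b) for (W, b), m in zip(params, mask)]

def ntt_loss(sparse_params, mask, dense_params, x, ntk_weight=1e-2):
    """Match both the outputs and the empirical NTK of the masked (sparse)
    network to those of the dense teacher -- no labels required.
    `ntk_weight` is an assumed trade-off hyperparameter."""
    sp = apply_mask(sparse_params, mask)
    out_err = jnp.mean((forward(sp, x) - forward(dense_params, x)) ** 2)
    ntk_err = jnp.mean((empirical_ntk(sp, x) - empirical_ntk(dense_params, x)) ** 2)
    return out_err + ntk_weight * ntk_err

# Toy usage: random ~80%-sparse weight masks, unlabeled inputs only.
sizes = [4, 16, 3]
dense_params = init_mlp(jax.random.PRNGKey(0), sizes)
sparse_params = init_mlp(jax.random.PRNGKey(1), sizes)
mask = [(jax.random.uniform(jax.random.PRNGKey(10 + i), (m, n)) > 0.8).astype(jnp.float32)
        for i, (m, n) in enumerate(zip(sizes[:-1], sizes[1:]))]
x = jax.random.normal(jax.random.PRNGKey(2), (8, sizes[0]))
grads = jax.grad(ntt_loss)(sparse_params, mask, dense_params, x)
```

Matching the kernel, and not only the initial outputs, constrains the sparse network's linearized training dynamics in function space, which is why the criterion needs no labels.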
