

Poster in Workshop: 2nd Annual Workshop on Topology, Algebra, and Geometry in Machine Learning (TAG-ML)

Caveats of neural persistence in deep neural networks

Leander Girrbach · Anders Christensen · A. Koepke · Ole Winther · Zeynep Akata


Abstract:

Neural Persistence is a prominent measure for quantifying neural network complexity, proposed in the emerging field of topological data analysis in deep learning. In this work, however, we find both theoretically and empirically that the variance of network weights and the spatial concentration of large weights are the main factors that impact neural persistence. First, we prove tighter bounds on neural persistence that motivate this claim theoretically. Then, we confirm that our interpretation holds in practice by calculating neural persistence for synthetic weight matrices and for trained deep neural networks. This raises the question of whether the benefits of neural persistence can be achieved by simpler means, since even computing 0-dimensional persistent homology for large matrices is costly.
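For context, the quantity discussed above can be computed per layer from the 0-dimensional persistent homology of the layer's bipartite weight graph, which reduces to a maximum spanning tree computation. The following is a minimal sketch of that calculation, assuming the definition of neural persistence from Rieck et al. (2019); the normalization, the bipartite-graph construction, and the choice of p-norm are assumptions taken from that paper, not details stated in this abstract.

```python
import numpy as np
import networkx as nx


def neural_persistence(W, p=2):
    """Sketch of per-layer neural persistence, assuming the definition of
    Rieck et al. (2019): the p-norm of the 0-dimensional persistence values
    of the layer's bipartite weight graph."""
    A = np.abs(W)
    A = A / A.max()  # normalize absolute weights to [0, 1]

    # Bipartite graph: one vertex per input/output neuron, edges weighted
    # by the normalized absolute weights.
    n_in, n_out = A.shape
    G = nx.Graph()
    for i in range(n_in):
        for j in range(n_out):
            G.add_edge(("in", i), ("out", j), weight=A[i, j])

    # In the descending filtration, each non-essential connected component
    # is born at filtration value 1 and dies at the weight of the maximum
    # spanning tree edge that merges it, so the persistence values are
    # 1 - w for the MST edge weights w.
    mst_weights = np.array(
        [d["weight"] for _, _, d in nx.maximum_spanning_edges(G, data=True)]
    )
    persistences = 1.0 - mst_weights

    # Neural persistence is the p-norm of the persistence values.
    return np.linalg.norm(persistences, ord=p)


# Example: neural persistence of a random Gaussian weight matrix.
rng = np.random.default_rng(0)
print(neural_persistence(rng.normal(size=(64, 32))))
```

Even in this simplified form, the cost is dominated by sorting all n_in * n_out edges for the spanning tree, which illustrates why the abstract notes that computing 0-dimensional persistent homology for large weight matrices is expensive.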
