

Spotlight

Representation Topology Divergence: A Method for Comparing Neural Network Representations.

Serguei Barannikov · Ilya Trofimov · Nikita Balabin · Evgeny Burnaev

Ballroom 1 & 2

Abstract:

Comparison of data representations is a complex, multi-aspect problem. We propose a method for comparing two data representations. We introduce the Representation Topology Divergence (RTD) score, which measures the dissimilarity in multi-scale topology between two point clouds of equal size with a one-to-one correspondence between points. The two point clouds may lie in different ambient spaces. The RTD score is one of the few practical methods based on topological data analysis that is applicable to real machine learning datasets. Experiments show that RTD agrees with intuitive assessments of representation similarity and is sensitive to a representation's fine topological structure. We use the RTD score to gain insights into neural network representations in computer vision and NLP for various problems: training dynamics analysis, data distribution shift, transfer learning, ensemble learning, and disentanglement assessment.
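To make the setting concrete, below is a minimal, hedged sketch of the input format the abstract describes: two representations of the same n points, possibly in ambient spaces of different dimension, compared by their multi-scale topology. This is not the authors' RTD implementation; as a simplified stand-in it contrasts only the 0-dimensional Vietoris-Rips barcodes of the two point clouds (whose death times coincide with the minimum spanning tree edge lengths), whereas RTD itself relies on a coupled cross-barcode construction described in the paper. The function and variable names are illustrative only.

```python
# Illustrative proxy for topology-based representation comparison.
# NOT the RTD algorithm: it compares only H0 (connected-component) barcodes.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree


def h0_barcode(points: np.ndarray) -> np.ndarray:
    """Death times of the Vietoris-Rips H0 barcode = MST edge lengths."""
    dist = squareform(pdist(points))
    mst = minimum_spanning_tree(dist)
    return np.sort(mst.data)


def h0_divergence(rep_a: np.ndarray, rep_b: np.ndarray) -> float:
    """Crude dissimilarity: L1 gap between sorted H0 barcodes.

    rep_a, rep_b: (n, d_a) and (n, d_b) arrays. Row i of each array must
    describe the same underlying data point (one-to-one correspondence);
    the ambient dimensions d_a and d_b may differ.
    """
    assert rep_a.shape[0] == rep_b.shape[0], "point clouds must have equal size"
    return float(np.abs(h0_barcode(rep_a) - h0_barcode(rep_b)).sum())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=(128, 32))        # representation in a 32-d space
    y = x @ rng.normal(size=(32, 64))     # same points, linearly mapped to 64-d
    z = rng.normal(size=(128, 64))        # unrelated representation
    print(h0_divergence(x, y), h0_divergence(x, z))
```

In this toy usage, the divergence between x and its linear image y should be smaller than between x and the unrelated cloud z; the actual RTD additionally captures higher-order, scale-dependent structure that this H0-only proxy ignores.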
