Poster in Workshop: Hardware-aware efficient training (HAET)

Studying the impact of magnitude pruning on contrastive learning methods

Francesco Corti · Rahim Entezari · Sara Hooker · Davide Bacciu · Olga Saukh


Abstract:

We study the impact of different versions of magnitude pruning on the representations learned by deep models trained with supervised and supervised contrastive learning methods. We discover that at high sparsity levels, contrastive learning yields a higher number of misclassified examples than supervised learning. We use the number of PIEs (Hooker et al., 2019), the Q-Score (Kalibhat et al., 2022), and the PD-Score (Baldock et al., 2021) to quantify the impact of pruning on the quality of the learned representations. Our analysis suggests that popular pruning methods are oblivious to representation learning: misclassified examples are largely unique to each combination of learning and pruning methods. The negative impact of sparsity on the quality of the learned representation is highest early in training.
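To make the setting concrete, the core operation studied here, magnitude pruning, removes the weights with the smallest absolute values. The following is a minimal NumPy sketch of one common variant (global magnitude pruning of a single weight tensor to a target sparsity); the function name and interface are illustrative, not taken from the paper:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the `sparsity` fraction of entries with the
    smallest absolute value (global magnitude pruning)."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    # Threshold = the k-th smallest absolute weight value.
    flat = np.abs(weights).ravel()
    threshold = np.partition(flat, k - 1)[k - 1]
    # Keep only weights whose magnitude exceeds the threshold.
    mask = np.abs(weights) > threshold
    return weights * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))
pruned = magnitude_prune(w, 0.9)
print(np.mean(pruned == 0))
```

In practice this masking is applied during or after training (one-shot or iteratively); the paper compares how such sparsification interacts with supervised versus supervised contrastive objectives.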
