Pruning methods can considerably reduce the size of artificial neural networks without harming their performance; in some cases they even uncover sub-networks that, when trained in isolation, match or surpass the test accuracy of their dense counterparts. Here, we characterize the inductive bias that pruning imprints in such "winning lottery tickets". Focusing on visual tasks, we analyze the architecture resulting from iterative magnitude pruning of a simple fully connected network. We show that the surviving node connectivity is local in input space and organized in patterns reminiscent of those found in convolutional networks. We investigate the role played by data and tasks in shaping the architecture of the pruned sub-network, and find that pruning performance, together with the ability to sift out noise and make local features emerge, improves with the size of the training set and the semantic value of the data. We also study different pruning procedures, and find that iterative magnitude pruning is particularly effective at distilling meaningful connectivity out of features present in the original task. Our results suggest the possibility of automatically discovering new and efficient architectural inductive biases in other datasets and tasks.
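The iterative magnitude pruning procedure central to the abstract can be sketched as follows. This is a minimal illustration of the generic "lottery ticket" recipe, not the authors' code: the weight shape, pruning fraction, number of rounds, and the stand-in `train_step` function are all assumptions for demonstration.

```python
import numpy as np

def iterative_magnitude_prune(w_init, train_step, rounds=3, frac=0.2):
    """Sketch of iterative magnitude pruning (IMP).

    Each round: train the currently unmasked weights, prune the smallest-
    magnitude fraction of the survivors, then rewind the survivors to
    their initial values (the lottery-ticket rewinding step).
    """
    mask = np.ones_like(w_init, dtype=bool)
    w = w_init.copy()
    for _ in range(rounds):
        w = train_step(w) * mask              # stand-in for real SGD training
        alive = np.abs(w[mask])               # magnitudes of surviving weights
        k = int(frac * alive.size)            # how many to remove this round
        if k == 0:
            break
        thresh = np.partition(alive, k)[k]    # (k+1)-th smallest magnitude
        mask &= np.abs(w) >= thresh           # drop weights below the cutoff
        w = w_init * mask                     # rewind survivors to init values
    return w, mask

# Toy usage: a fake "training" step that just rescales the weights.
rng = np.random.default_rng(0)
w0 = rng.normal(size=(8, 8))
w, mask = iterative_magnitude_prune(w0, lambda w: 1.1 * w)
print(f"surviving fraction: {mask.mean():.3f}")
```

In a real experiment `train_step` would be many epochs of gradient descent on the task loss; the sketch only captures the prune-and-rewind loop around it.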
Author Information
Franco Pellegrini (École normale supérieure, Paris)
Giulio Biroli (ENS)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Spotlight: Neural Network Pruning Denoises the Features and Makes Local Connectivity Emerge in Visual Tasks
  Tue. Jul 19th 08:45 -- 08:50 PM, Room 310
More from the Same Authors
- 2021: On the interplay between data structure and loss function: an analytical study of generalization for classification
  Stéphane d'Ascoli · Marylou Gabrié · Levent Sagun · Giulio Biroli
- 2021 Poster: ConViT: Improving Vision Transformers with Soft Convolutional Inductive Biases
  Stéphane d'Ascoli · Hugo Touvron · Matthew Leavitt · Ari Morcos · Giulio Biroli · Levent Sagun
- 2021 Spotlight: ConViT: Improving Vision Transformers with Soft Convolutional Inductive Biases
  Stéphane d'Ascoli · Hugo Touvron · Matthew Leavitt · Ari Morcos · Giulio Biroli · Levent Sagun
- 2020 Poster: Double Trouble in Double Descent: Bias and Variance(s) in the Lazy Regime
  Stéphane d'Ascoli · Maria Refinetti · Giulio Biroli · Florent Krzakala