Studying the Consistency and Composability of Lottery Ticket Pruning Masks
Rajiv Movva · Michael Carbin · Jonathan Frankle
Magnitude pruning is a common, effective technique for identifying sparse subnetworks at little cost to accuracy. In this work, we ask whether a given architecture's accuracy-sparsity tradeoff can be improved by combining pruning information across multiple runs of training. From a shared ResNet-20 initialization, we train several network copies (siblings) to completion on CIFAR-10 using different SGD data orders. When siblings are trained directly from this initialization, their pruning masks are not much more similar than chance; starting sibling training after a few epochs of shared pretraining, however, significantly increases mask overlap. We then choose a subnetwork by either (1) keeping every weight that survives pruning in any sibling (mask union) or (2) keeping only the weights that survive pruning in all siblings (mask intersection), and retrain the resulting subnetwork. Strikingly, we find that union and intersection masks perform very similarly: both match the accuracy-sparsity tradeoff of the one-shot magnitude pruning baseline, even when we combine masks from up to k = 10 siblings.
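As a concrete illustration of the mask-combination step described above, the sketch below computes one-shot magnitude pruning masks for several siblings and merges them by union or intersection. This is a minimal sketch rather than the authors' code: it assumes PyTorch-style weight tensors for a single layer, and the helper names (magnitude_mask, combine_masks) are hypothetical.

```python
import torch

def magnitude_mask(weights: torch.Tensor, sparsity: float) -> torch.Tensor:
    """One-shot magnitude pruning: keep the largest-magnitude
    (1 - sparsity) fraction of weights, zero out the rest."""
    n_keep = max(1, int((1.0 - sparsity) * weights.numel()))
    threshold = weights.abs().flatten().topk(n_keep).values.min()
    return (weights.abs() >= threshold).float()

def combine_masks(sibling_weights, sparsity: float, mode: str = "union") -> torch.Tensor:
    """Union keeps weights that survive pruning in ANY sibling;
    intersection keeps only weights that survive in ALL siblings."""
    masks = torch.stack([magnitude_mask(w, sparsity) for w in sibling_weights])
    if mode == "union":
        return (masks.sum(dim=0) > 0).float()
    if mode == "intersection":
        return (masks.prod(dim=0) > 0).float()
    raise ValueError(f"unknown mode: {mode}")

# Example: combine masks from k = 3 siblings of one layer at 80% sparsity.
siblings = [torch.randn(64, 64) for _ in range(3)]
union_mask = combine_masks(siblings, sparsity=0.8, mode="union")
inter_mask = combine_masks(siblings, sparsity=0.8, mode="intersection")
# The union mask is denser and the intersection mask sparser than any
# single sibling's mask; the chosen mask is then applied and the
# subnetwork retrained, as in the paper's setup.
```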
Author Information
Rajiv Movva (MIT)
Michael Carbin (MIT)
Jonathan Frankle (MIT CSAIL)
More from the Same Authors
- 2021: On the Generalization Improvement from Neural Network Pruning
  Tian Jin · Gintare Karolina Dziugaite · Michael Carbin
- 2022: Pre-Training on a Data Diet: Identifying Sufficient Examples for Early Training
  Mansheej Paul · Brett Larsen · Surya Ganguli · Jonathan Frankle · Gintare Karolina Dziugaite
- 2022: Knowledge Distillation for Efficient Sequences of Training Runs
  Xingyu Liu · Alexander Leonardi · Lu Yu · Christopher Gilmer-Hill · Matthew Leavitt · Jonathan Frankle
- 2023: Distributions for Compositionally Differentiating Parametric Discontinuities
  Jesse Michel · Kevin Mu · Xuanda Yang · Sai Praveen Bangaru · Elias Rojas Collins · Gilbert Bernstein · Jonathan Ragan-Kelley · Michael Carbin · Tzu-Mao Li
- 2023: Can LLMs Generate Random Numbers? Evaluating LLM Sampling in Controlled Domains
  Aspen Hopkins · Alex Renda · Michael Carbin
- 2022 Poster: What Can Linear Interpolation of Neural Network Loss Landscapes Tell Us?
  Tiffany Vlaar · Jonathan Frankle
- 2022 Spotlight: What Can Linear Interpolation of Neural Network Loss Landscapes Tell Us?
  Tiffany Vlaar · Jonathan Frankle
- 2021 Poster: On the Predictability of Pruning Across Scales
  Jonathan Rosenfeld · Jonathan Frankle · Michael Carbin · Nir Shavit
- 2021 Spotlight: On the Predictability of Pruning Across Scales
  Jonathan Rosenfeld · Jonathan Frankle · Michael Carbin · Nir Shavit
- 2020: Q&A: Jonathan Frankle
  Jonathan Frankle · Mayoore Jaiswal
- 2020: Contributed Talk: Jonathan Frankle
  Jonathan Frankle
- 2020 Poster: Linear Mode Connectivity and the Lottery Ticket Hypothesis
  Jonathan Frankle · Gintare Karolina Dziugaite · Daniel Roy · Michael Carbin
- 2019 Poster: Ithemal: Accurate, Portable and Fast Basic Block Throughput Estimation using Deep Neural Networks
  Charith Mendis · Alex Renda · Saman Amarasinghe · Michael Carbin
- 2019 Oral: Ithemal: Accurate, Portable and Fast Basic Block Throughput Estimation using Deep Neural Networks
  Charith Mendis · Alex Renda · Saman Amarasinghe · Michael Carbin