We show that the error of iteratively magnitude-pruned networks empirically follows a scaling law with interpretable coefficients that depend on the architecture and task. We functionally approximate the error of the pruned networks, showing it is predictable in terms of an invariant tying width, depth, and pruning level, such that networks of vastly different pruned densities are interchangeable. We demonstrate the accuracy of this approximation over orders of magnitude in depth, width, dataset size, and density. We show that the functional form holds (generalizes) for large-scale data (e.g., ImageNet) and architectures (e.g., ResNets). As neural networks become ever larger and costlier to train, our findings suggest a framework for reasoning conceptually and analytically about a standard method for unstructured pruning.
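The abstract does not reproduce the functional form itself. As a rough illustration of the kind of fit involved, the sketch below fits a generic saturating power law to hypothetical (density, error) measurements from iterative magnitude pruning. The form err_floor + c * density^(-gamma), the function names, and all data values are assumptions for illustration only, not the estimator proposed in the paper.

# Illustrative sketch only: fits a generic saturating power law to
# hypothetical (density, error) points from iterative magnitude pruning.
# The functional form and data values are assumptions, not the paper's
# proposed approximation.
import numpy as np
from scipy.optimize import curve_fit

def saturating_power_law(density, err_floor, c, gamma):
    # Error sits near a floor at high density and grows as a power law
    # as the fraction of retained weights shrinks.
    return err_floor + c * density ** (-gamma)

# Hypothetical measurements: fraction of weights kept vs. test error.
densities = np.array([1.0, 0.5, 0.25, 0.125, 0.0625, 0.03125])
errors = np.array([0.240, 0.242, 0.248, 0.260, 0.285, 0.330])

params, _ = curve_fit(saturating_power_law, densities, errors,
                      p0=[0.23, 0.01, 1.0])
err_floor, c, gamma = params
print(f"floor={err_floor:.3f}, coeff={c:.4f}, exponent={gamma:.2f}")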
Author Information
Jonathan Rosenfeld (MIT)
Jonathan Frankle (MIT CSAIL)
Michael Carbin (MIT)
Nir Shavit (MIT)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: On the Predictability of Pruning Across Scales
  Wed. Jul 21st 04:00 -- 06:00 AM
More from the Same Authors
- 2021 : Studying the Consistency and Composability of Lottery Ticket Pruning Masks
  Rajiv Movva · Michael Carbin · Jonathan Frankle
- 2021 : On the Generalization Improvement from Neural Network Pruning
  Tian Jin · Gintare Karolina Dziugaite · Michael Carbin
- 2022 : Pre-Training on a Data Diet: Identifying Sufficient Examples for Early Training
  Mansheej Paul · Brett Larsen · Surya Ganguli · Jonathan Frankle · Gintare Karolina Dziugaite
- 2022 : Knowledge Distillation for Efficient Sequences of Training Runs
  Xingyu Liu · Alexander Leonardi · Lu Yu · Christopher Gilmer-Hill · Matthew Leavitt · Jonathan Frankle
- 2023 : Distributions for Compositionally Differentiating Parametric Discontinuities
  Jesse Michel · Kevin Mu · Xuanda Yang · Sai Praveen Bangaru · Elias Rojas Collins · Gilbert Bernstein · Jonathan Ragan-Kelley · Michael Carbin · Tzu-Mao Li
- 2023 : Can LLMs Generate Random Numbers? Evaluating LLM Sampling in Controlled Domains
  Aspen Hopkins · Alex Renda · Michael Carbin
- 2022 Poster: What Can Linear Interpolation of Neural Network Loss Landscapes Tell Us?
  Tiffany Vlaar · Jonathan Frankle
- 2022 Spotlight: What Can Linear Interpolation of Neural Network Loss Landscapes Tell Us?
  Tiffany Vlaar · Jonathan Frankle
- 2020 : Q&A: Jonathan Frankle
  Jonathan Frankle · Mayoore Jaiswal
- 2020 : Contributed Talk: Jonathan Frankle
  Jonathan Frankle
- 2020 Poster: Linear Mode Connectivity and the Lottery Ticket Hypothesis
  Jonathan Frankle · Gintare Karolina Dziugaite · Daniel Roy · Michael Carbin
- 2019 Poster: Ithemal: Accurate, Portable and Fast Basic Block Throughput Estimation using Deep Neural Networks
  Charith Mendis · Alex Renda · Saman Amarasinghe · Michael Carbin
- 2019 Oral: Ithemal: Accurate, Portable and Fast Basic Block Throughput Estimation using Deep Neural Networks
  Charith Mendis · Alex Renda · Saman Amarasinghe · Michael Carbin
- 2017 Poster: Deep Tensor Convolution on Multicores
  David Budden · Alexander Matveev · Shibani Santurkar · Shraman Ray Chaudhuri · Nir Shavit
- 2017 Talk: Deep Tensor Convolution on Multicores
  David Budden · Alexander Matveev · Shibani Santurkar · Shraman Ray Chaudhuri · Nir Shavit