On the Predictability of Pruning Across Scales
Jonathan Rosenfeld · Jonathan Frankle · Michael Carbin · Nir Shavit

Tue Jul 20 05:25 PM -- 05:30 PM (PDT)

We show that the error of iteratively magnitude-pruned networks empirically follows a scaling law with interpretable coefficients that depend on the architecture and task. We functionally approximate the error of the pruned networks, showing it is predictable in terms of an invariant tying width, depth, and pruning level, such that networks of vastly different pruned densities are interchangeable. We demonstrate the accuracy of this approximation over orders of magnitude in depth, width, dataset size, and density. We show that the functional form holds (generalizes) for large-scale data (e.g., ImageNet) and architectures (e.g., ResNets). As neural networks become ever larger and costlier to train, our findings suggest a framework for reasoning conceptually and analytically about a standard method for unstructured pruning.
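The standard method the abstract refers to is iterative magnitude pruning (IMP): alternate training with global removal of the smallest-magnitude weights, sweeping a network across densities. Below is a minimal sketch of that loop, assuming PyTorch's built-in pruning utilities; the toy model is a stand-in, and train_one_round / evaluate are hypothetical placeholders for the training and measurement steps.

# A minimal sketch of iterative magnitude pruning (IMP): each round
# removes a fixed fraction of the remaining smallest-magnitude weights
# globally, then retrains. Only the prune-retrain loop is the point.
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# All weight matrices are eligible for unstructured pruning.
params_to_prune = [(m, "weight") for m in model.modules()
                   if isinstance(m, nn.Linear)]

def density(params):
    # Fraction of surviving (nonzero) weights across the pruned tensors.
    total = sum(getattr(m, n).numel() for m, n in params)
    alive = sum((getattr(m, n) != 0).sum().item() for m, n in params)
    return alive / total

rate = 0.2  # remove 20% of the remaining weights each round
for round_idx in range(10):
    # train_one_round(model)  # placeholder: (re)train to convergence
    prune.global_unstructured(params_to_prune,
                              pruning_method=prune.L1Unstructured,
                              amount=rate)
    # err = evaluate(model)   # placeholder: test error at this density
    print(f"round {round_idx}: density = {density(params_to_prune):.3f}")

Sweeping such runs over depth, width, and dataset size yields the grid of (density, error) measurements that the paper's scaling law functionally approximates.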

Author Information

Jonathan Rosenfeld (MIT)
Jonathan Frankle (MIT CSAIL)
Michael Carbin (MIT)
Nir Shavit (MIT)
