Oral
Wed Jun 12 12:10 PM -- 12:15 PM (PDT) @ Room 201
Parameter efficient training of deep convolutional neural networks by dynamic sparse reparameterization
Hesham Mostafa · Xin Wang

Deep neural networks are typically highly over-parameterized: pruning techniques can remove a significant fraction of network parameters with little loss in accuracy. Recently, techniques based on dynamic re-allocation of non-zero parameters have emerged for training sparse networks directly, without having to train a large dense model beforehand. We present a parameter re-allocation scheme that addresses the limitations of previous methods, such as their high computational cost and the fixed number of parameters they allocate to each layer. We investigate the performance of these dynamic re-allocation methods in deep convolutional networks and show that our method outperforms previous static and dynamic reparameterization methods, yielding the best accuracy for a given number of trainable parameters and performing on par with networks obtained by iteratively pruning a trained dense model. We further investigate the mechanisms underlying the superior performance of the resulting sparse networks. We find that neither the structure nor the initialization of the sparse networks discovered by our parameter re-allocation scheme is sufficient to explain their superior generalization performance. Rather, it is the continuous exploration of different sparse network structures during training that is critical to effective learning. We show that it is more fruitful to explore these structural degrees of freedom than to add extra parameters to the network. Code used to run all experiments is available at the anonymous repository: https://gitlab.com/anonymous.icml.2019/dynamic-parameterization-icml19.
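
To illustrate the kind of dynamic re-allocation the abstract describes, the sketch below shows one hypothetical re-allocation step: magnitude pruning under a global threshold that is adapted toward a target prune count, followed by regrowth of the freed connections distributed across layers in proportion to their surviving non-zero weights. This is a minimal illustration under those assumptions, not the authors' implementation from the linked repository; all names (reallocate_step, target_prune, etc.) are illustrative.

import numpy as np

def reallocate_step(weights, masks, threshold, target_prune, tolerance=0.1):
    """One sketch of a dynamic sparse re-allocation step.

    weights: list of np.ndarray parameter tensors (one per layer)
    masks:   list of boolean arrays marking active (non-zero) positions
    threshold: global magnitude threshold, adapted each call
    target_prune: desired number of weights to prune per step
    Returns the updated threshold; weights and masks are modified in place.
    """
    # 1. Prune: deactivate weights whose magnitude falls below the threshold.
    pruned_per_layer = []
    for w, m in zip(weights, masks):
        drop = (np.abs(w) < threshold) & m
        m &= ~drop
        w[~m] = 0.0
        pruned_per_layer.append(int(drop.sum()))
    total_pruned = sum(pruned_per_layer)

    # 2. Adapt the global threshold so roughly `target_prune` weights are
    #    removed per step (simple multiplicative update; avoids per-layer
    #    sorting of weight magnitudes).
    if total_pruned < (1 - tolerance) * target_prune:
        threshold *= 2.0
    elif total_pruned > (1 + tolerance) * target_prune:
        threshold /= 2.0

    # 3. Regrow: hand the freed parameter budget back to the layers in
    #    proportion to their surviving non-zero counts, activating
    #    previously inactive positions (new connections start at zero).
    alive = np.array([m.sum() for m in masks], dtype=float)
    shares = np.floor(total_pruned * alive / alive.sum()).astype(int)
    for m, k in zip(masks, shares):
        free = np.flatnonzero(~m)
        if k == 0 or free.size == 0:
            continue
        grow = np.random.choice(free, size=min(int(k), free.size), replace=False)
        m[np.unravel_index(grow, m.shape)] = True
    return threshold

# Toy usage: two layers at roughly 50% sparsity, three re-allocation steps.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((8, 8)), rng.standard_normal((16, 8))]
masks = [rng.random(w.shape) < 0.5 for w in weights]
for w, m in zip(weights, masks):
    w[~m] = 0.0
thr = 0.05
for _ in range(3):
    thr = reallocate_step(weights, masks, thr, target_prune=10)

In this reading, the adaptive global threshold and the proportional regrowth rule are what avoid the high computational cost and fixed per-layer parameter counts that the abstract cites as limitations of earlier methods.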