APP: Anytime Progressive Pruning
Diganta Misra · Bharat Runwal · Tianlong Chen · Zhangyang “Atlas” Wang · Irina Rish

Fri Jul 22 12:15 PM -- 01:15 PM (PDT)
With the latest advances in deep learning, several methods have been investigated for optimal learning settings in scenarios where the data stream is continuous over time. However, training sparse networks in such settings has often been overlooked. In this paper, we explore the problem of training a neural network to a target sparsity in a particular case of online learning: the Anytime Learning at Macroscale (ALMA) paradigm. We propose a novel progressive pruning method, referred to as Anytime Progressive Pruning (APP); the proposed approach significantly outperforms the dense baseline and Anytime OSP models across multiple architectures and datasets under short, moderate, and long-sequence training. For example, in few-shot Restricted ImageNet training, our method improves accuracy by ≈ 7% and reduces the generalisation gap by ≈ 22%, while being roughly one-third the size of the dense baseline model. The code and experiment dashboards can be accessed at https://github.com/landskape-ai/Progressive-Pruning and https://wandb.ai/landskape/APP, respectively.
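To make the idea of progressive pruning concrete, below is a minimal, hypothetical PyTorch sketch of one way such a scheme could look: each time a new mega-batch arrives, global sparsity is stepped linearly toward the target via magnitude pruning. The function name progressive_prune, the linear schedule, and the global magnitude criterion are illustrative assumptions for exposition, not necessarily the paper's exact algorithm; the linked repository contains the actual implementation.

import torch


@torch.no_grad()
def progressive_prune(model, megabatch_idx, num_megabatches, target_sparsity):
    """Step global sparsity linearly toward the target as mega-batches arrive.

    Hypothetical sketch: schedule and pruning criterion are assumptions,
    not necessarily the method used in the paper.
    """
    # Fraction of weights to zero out after seeing this mega-batch.
    sparsity = target_sparsity * (megabatch_idx + 1) / num_megabatches

    # Collect all multi-dimensional weight tensors for a global magnitude ranking.
    weights = [p for name, p in model.named_parameters()
               if name.endswith("weight") and p.dim() > 1]
    scores = torch.cat([w.abs().flatten() for w in weights])
    k = int(sparsity * scores.numel())
    if k == 0:
        return

    # The k-th smallest magnitude defines the global pruning threshold.
    threshold = torch.kthvalue(scores, k).values

    # Zero out every weight at or below the threshold.
    for w in weights:
        w.mul_((w.abs() > threshold).to(w.dtype))

In the ALMA setting, such a routine would be invoked once per incoming mega-batch, with fine-tuning on the new data in between pruning steps, so the network reaches the target sparsity only after the full sequence has been consumed. Note that this sketch zeroes weights in place without storing masks, so pruned weights can regrow during fine-tuning unless the masks are re-applied after each optimizer step.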

Author Information

Diganta Misra (Mila)

I am a research Master's student at Mila, Montreal, working with Prof. Irina Rish on continual learning and robustness. I also serve as a student researcher at Morgan Stanley, where I work on information-theoretic approaches to time-series data. In addition, I am the founder of a non-profit research organisation called Landskape, where I work on sparsity and reprogramming in active collaboration with IBM NY, VITA UT-Austin, and Google Research.

Bharat Runwal (Indian Institute of Technology (IIT) Delhi)
Tianlong Chen (University of Texas at Austin)
Zhangyang “Atlas” Wang (University of Texas at Austin)
Irina Rish (Mila/UdeM)
