

Poster in Workshop: Dynamic Neural Networks

Just-in-Time Sparsity: Learning Dynamic Sparsity Schedules

Chiratidzo Matowe · Arnu Pretorius · Benjamin Rosman · Sara Hooker


Abstract:

Sparse neural networks offer computational benefits while often matching or even improving the generalization performance of their dense counterparts. Popular sparsification methods have focused on what to sparsify, i.e. which redundant components to remove from a network, while when to sparsify has received less attention and is usually handled with heuristics or simple, fixed schedules. In this work, we focus on learning sparsity schedules from scratch using reinforcement learning. On simple CNNs and ResNet-18, we show that our learned schedules are diverse across layers and training steps, while achieving performance competitive with naive handcrafted schedules. Our methodology is general-purpose and can be applied to learn effective sparsity schedules for any pruning implementation.
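The abstract does not include code, but the core idea of learning when and how much to sparsify can be illustrated concretely. Below is a minimal, hypothetical sketch (not the authors' implementation): an epsilon-greedy bandit agent per layer picks a sparsity level at each training step, magnitude pruning applies it, and the change in validation loss serves as the reward. The names `BanditScheduler`, `magnitude_prune`, and `SPARSITY_LEVELS` are illustrative assumptions, as are the toy model, data, and reward signal.

```python
# Sketch: learning a per-layer sparsity schedule with a simple RL agent.
# Assumptions (not from the paper): bandit agents, magnitude pruning,
# validation-loss improvement as reward, a toy linear model and random data.
import random
import torch
import torch.nn as nn

SPARSITY_LEVELS = [0.0, 0.25, 0.5, 0.75, 0.9]  # per-layer action space

class BanditScheduler:
    """Epsilon-greedy bandit that picks a sparsity level for one layer."""
    def __init__(self, n_actions, eps=0.1):
        self.q = [0.0] * n_actions  # running value estimate per action
        self.n = [0] * n_actions    # visit count per action
        self.eps = eps

    def act(self):
        if random.random() < self.eps:
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=lambda a: self.q[a])

    def update(self, action, reward):
        self.n[action] += 1
        self.q[action] += (reward - self.q[action]) / self.n[action]

def magnitude_prune(weight, sparsity):
    """Return a 0/1 mask zeroing the smallest-magnitude fraction of weights."""
    if sparsity <= 0.0:
        return torch.ones_like(weight)
    k = int(sparsity * weight.numel())
    threshold = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > threshold).float()

# Toy model and data stand in for the CNNs / ResNet-18 in the abstract.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
layers = [m for m in model if isinstance(m, nn.Linear)]
agents = [BanditScheduler(len(SPARSITY_LEVELS)) for _ in layers]

x_val, y_val = torch.randn(128, 20), torch.randint(0, 2, (128,))
prev_val = float("inf")

for step in range(200):
    # Each agent picks this step's sparsity level for its layer.
    actions = [agent.act() for agent in agents]
    masks = [magnitude_prune(layer.weight.data, SPARSITY_LEVELS[a])
             for layer, a in zip(layers, actions)]
    for layer, mask in zip(layers, masks):
        layer.weight.data *= mask

    # One training step on the masked network (random batch for the sketch).
    x, y = torch.randn(64, 20), torch.randint(0, 2, (64,))
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    for layer, mask in zip(layers, masks):  # keep pruned weights at zero
        layer.weight.grad *= mask
    opt.step()

    # Reward: improvement in validation loss after this step's actions.
    with torch.no_grad():
        val = loss_fn(model(x_val), y_val).item()
    reward = prev_val - val
    prev_val = val
    for agent, a in zip(agents, actions):
        agent.update(a, reward)
```

Because each layer has its own agent and acts at every step, the resulting schedule can vary across both layers and training time, which is the kind of diversity the abstract reports for the learned schedules.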
