

Poster in Workshop: Neural Compression: From Information Theory to Applications

Less-Energy-Usage Network with Batch Power Iteration

Hao Huang · Tapan Shah · Shinjae Yoo · Scott Evans


Abstract:

Large-scale neural networks are among the mainstream tools of modern big data analytics, but their training and inference phases are accompanied by huge energy consumption and a large carbon footprint. Energy efficiency, running-time complexity, and model storage size are three major considerations when using deep neural networks in modern applications. Here we introduce the Less-Energy-Usage Network, or LEAN. Unlike classic network compression techniques (e.g., pruning and knowledge distillation) that transform a large pre-trained network into a smaller one, our method builds a lean and effective network during the training phase. It is based on spectral theory and batch power iteration learning, and can be applied to almost any type of neural network to reduce its size. Preliminary experimental results show that LEAN consumes 30% less energy while achieving 95% of the baseline accuracy, with a 1.5x speed-up and up to 90% fewer parameters compared to the baseline CNN model.
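The abstract does not spell out the algorithm, but batch (block) power iteration is the standard way to estimate the dominant singular subspace of a weight matrix so that the matrix can be replaced by low-rank factors. The sketch below is a minimal illustration of that generic idea, not the authors' implementation; the function name, the target rank k, and the iteration count are assumptions for the example.

```python
import numpy as np

def batch_power_iteration(W, k, n_iter=20, seed=0):
    """Estimate the top-k singular subspace of W via block power iteration.

    Returns factors B (m x k) and Q (n x k) with W approximately B @ Q.T,
    i.e. a rank-k compression of the original (m x n) weight matrix.
    """
    rng = np.random.default_rng(seed)
    m, n = W.shape
    # Start from a random block of k orthonormal columns.
    Q, _ = np.linalg.qr(rng.standard_normal((n, k)))
    for _ in range(n_iter):
        # One iteration of multiplying by W^T W, re-orthonormalized
        # so the block does not collapse onto the single top direction.
        Z = W.T @ (W @ Q)
        Q, _ = np.linalg.qr(Z)
    # Project W onto the estimated top-k right singular subspace.
    B = W @ Q
    return B, Q

# Usage: compress a dense layer's weights to rank 32 and check the error.
W = np.random.randn(512, 1024)
B, Q = batch_power_iteration(W, k=32)
W_lowrank = B @ Q.T
print("relative error:", np.linalg.norm(W - W_lowrank) / np.linalg.norm(W))
```

Storing B and Q requires (m + n) * k parameters instead of m * n, which is the kind of reduction that could account for the "up to 90% fewer parameters" figure when k is small relative to the layer dimensions.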
