Workshop
Hardware-Aware Efficient Training (HAET)
Gonçalo Mordido · Yoshua Bengio · Ghouthi Boukli Hacene · Vincent Gripon · François Leduc-Primeau · Vahid Partovi Nia · Julie Grollier

Sat Jul 23 05:45 AM -- 02:30 PM (PDT) @ Room 327 - 329
Event URL: https://haet2022.github.io/

To reach top-tier performance, deep learning models usually require a large number of parameters and operations, consuming considerable power and memory. Several methods have been proposed to tackle this problem, leveraging quantization of parameters, pruning, clustering of parameters, decomposition of convolutions, or distillation. However, most of these works focus on improving efficiency at inference time and disregard the cost of training, even though training accounts for most of the energy footprint of deep learning in practice. This workshop therefore focuses on reducing the training complexity of deep neural networks. Our aim is to encourage submissions specifically concerning the reduction of energy, time, or memory usage at training time. Topics of interest include but are not limited to: (i) compression methods for memory and complexity reduction during training, (ii) energy-efficient hardware architectures, (iii) energy-efficient training algorithms, (iv) novel energy models or energy-efficiency training benchmarks, (v) practical applications of low-energy training.
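
As a minimal sketch of one building block from topic (i) — not taken from the workshop or from any particular submission — the snippet below fake-quantizes weights during the forward pass with a straight-through estimator, so that training itself operates on low-precision values while gradients still update a full-precision copy. The module name QuantLinear and the 8-bit setting are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FakeQuant(torch.autograd.Function):
    """Uniform symmetric fake quantization with a straight-through gradient.

    Illustrative only; real low-precision training schemes differ in how
    they choose scales, bit widths, and which tensors they quantize.
    """
    @staticmethod
    def forward(ctx, w, num_bits=8):
        qmax = 2 ** (num_bits - 1) - 1
        # Per-tensor scale from the largest weight magnitude.
        scale = w.abs().max().clamp(min=1e-8) / qmax
        return torch.round(w / scale).clamp(-qmax, qmax) * scale

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: pass the gradient unchanged to the
        # full-precision weights (no gradient for num_bits).
        return grad_output, None

class QuantLinear(nn.Linear):
    """Linear layer whose forward pass sees quantized weights."""
    def forward(self, x):
        return F.linear(x, FakeQuant.apply(self.weight), self.bias)

# Usage: train as usual; the optimizer updates full-precision weights,
# while every forward pass uses their quantized versions.
model = nn.Sequential(QuantLinear(32, 16), nn.ReLU(), QuantLinear(16, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 32), torch.randint(0, 2, (8,))
loss = F.cross_entropy(model(x), y)
loss.backward()
opt.step()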

Author Information

Gonçalo Mordido (Mila & Polytechnique Montreal)
Yoshua Bengio (Mila - Quebec AI Institute)
Ghouthi Boukli Hacene (Sony)
Vincent Gripon (IMT Atlantique)
François Leduc-Primeau (Polytechnique Montreal)
Vahid Partovi Nia (Polytechnique Montreal)
Julie Grollier (Unité Mixte CNRS/Thalès)
