Spotlight
Winograd Algorithm for AdderNet
Wenshuo Li · Hanting Chen · Mingqiang Huang · Xinghao Chen · Chunjing Xu · Yunhe Wang

Tue Jul 20 07:20 AM -- 07:25 AM (PDT)

The adder neural network (AdderNet) is a new kind of deep model that replaces the massive multiplications in convolutions with additions while preserving high performance. Since the hardware complexity of an addition is much lower than that of a multiplication, the overall energy consumption is reduced significantly. To further reduce the hardware overhead of AdderNet, this paper studies the Winograd algorithm, a widely used fast algorithm for accelerating convolution and saving computational cost. Unfortunately, the conventional Winograd algorithm cannot be directly applied to AdderNets, since the distributive law that holds for multiplication is not valid for the l1-norm. Therefore, we replace the element-wise multiplication in the Winograd equation with addition and then develop a new set of transform matrices that enhance the representation ability of the output features to maintain performance. Moreover, we propose an l2-to-l1 training strategy to mitigate the negative impact caused by this formal inconsistency. Experimental results on both FPGA and standard benchmarks show that the new method further reduces energy consumption without affecting the accuracy of the original AdderNet.
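To make the modification concrete, below is a minimal NumPy sketch of the 1-D F(2,3) Winograd transform. It contrasts the classical multiplicative form with an adder-style variant in which the element-wise (Hadamard) product is replaced by a negated element-wise absolute difference, mirroring AdderNet's l1-norm similarity. The classical matrices B^T, G, and A^T are used here purely for illustration; the paper derives its own modified transform matrices (together with the l2-to-l1 training strategy) to recover accuracy, which this sketch does not reproduce.

import numpy as np

# Classical Winograd F(2,3) transform matrices (illustration only; the paper
# derives a new set of transform matrices to preserve accuracy).
B_T = np.array([[1,  0, -1,  0],
                [0,  1,  1,  0],
                [0, -1,  1,  0],
                [0,  1,  0, -1]], dtype=float)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])
A_T = np.array([[1, 1,  1,  0],
                [0, 1, -1, -1]], dtype=float)

def winograd_f23(d, g):
    # Standard form: two correlation outputs from a 4-tap input tile d and a
    # 3-tap filter g, using 4 element-wise multiplications instead of 6.
    return A_T @ ((G @ g) * (B_T @ d))

def winograd_f23_adder(d, g):
    # Adder-style variant (assumed form): the Hadamard product is replaced by
    # a negated element-wise absolute difference, the l1-style operation used
    # by AdderNet. With the classical matrices above this changes the output
    # distribution, which is why the paper introduces modified matrices and
    # l2-to-l1 training.
    return A_T @ (-np.abs((G @ g) - (B_T @ d)))

d = np.array([1.0, 2.0, 3.0, 4.0])       # input tile
g = np.array([0.5, 1.0, -0.5])           # filter
print(winograd_f23(d, g))                # [1. 2.]
print(np.correlate(d, g, mode='valid'))  # same result: [1. 2.]
print(winograd_f23_adder(d, g))          # adder-style outputs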

Author Information

Wenshuo Li (Huawei)
Hanting Chen (Peking University)
Mingqiang Huang (Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences)
Xinghao Chen (Noah's Ark Lab, Huawei Technologies)
Chunjing Xu (Huawei Noah's Ark Lab)
Yunhe Wang (Noah's Ark Lab, Huawei Technologies)
