Poster in Workshop: 2nd Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@ICML 2024)
An Analytical Approach to Enhancing DNN Efficiency and Accuracy Using Approximate Multiplication
Salar Shakibhamedan · Anice Jahanjoo · Amin Aminifar · Nima Amirafshar · Nima TaheriNejad · Axel Jantsch
Achieving higher accuracy in Deep Neural Networks (DNNs) often reaches a plateau despite extensive training, retraining, and fine-tuning. This paper introduces an analytical study using approximate multipliers to investigate potential accuracy improvements. Leveraging the principles of Information Bottleneck (IB) theory, we analyze the enhanced information and feature extraction capabilities provided by approximate multipliers. Through Information Plane (IP) analysis, we gain a detailed understanding of DNN behavior under this approach. Our analysis indicates that this technique can break through existing accuracy barriers while offering computational and energy efficiency benefits. Compared to traditional methods that are computationally intensive, our approach uses less demanding optimization techniques. Additionally, approximate multipliers contribute to reduced energy consumption during both the training and inference phases. Experimental results support the potential of this method, suggesting it is a promising direction for DNN optimization.
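The abstract does not name the specific approximate multiplier designs evaluated, so the following is a minimal illustrative sketch rather than the authors' method: Mitchell's classic logarithmic multiplier, emulated in NumPy, showing how approximate products can stand in for exact ones in a layer's dot products. The function name `mitchell_approx_mul` and the toy `W`, `x` example are hypothetical.

```python
import numpy as np

def mitchell_approx_mul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Element-wise Mitchell logarithmic approximate multiplication.

    Approximates a*b via log2(a*b) ~= (ka + fa) + (kb + fb), dropping the
    fa*fb cross term; worst-case error is roughly 11% per product.
    """
    sign = np.sign(a) * np.sign(b)
    a, b = np.abs(a), np.abs(b)
    nonzero = (a > 0) & (b > 0)
    a = np.where(nonzero, a, 1.0)  # avoid log2(0); masked out below
    b = np.where(nonzero, b, 1.0)
    ka, kb = np.floor(np.log2(a)), np.floor(np.log2(b))  # characteristics
    fa, fb = a / 2.0**ka - 1.0, b / 2.0**kb - 1.0        # mantissas in [0, 1)
    f = fa + fb
    # Piecewise-linear antilogarithm: carry into the exponent when f >= 1.
    approx = np.where(f < 1.0,
                      2.0**(ka + kb) * (1.0 + f),
                      2.0**(ka + kb + 1.0) * f)
    return sign * np.where(nonzero, approx, 0.0)

# Toy usage: approximate y = W @ x by summing approximate element-wise
# products (software emulation; real deployments use hardware multipliers).
rng = np.random.default_rng(0)
W, x = rng.normal(size=(4, 8)), rng.normal(size=8)
y_approx = mitchell_approx_mul(W, x[None, :]).sum(axis=1)
print(np.abs(y_approx - W @ x).max())
```

The IP analysis mentioned in the abstract tracks per-layer mutual information terms I(X;T) and I(T;Y) over training; the estimator used is likewise not stated here. A common choice in IP studies, sketched below under that assumption, is equal-width binning of activations followed by a joint-histogram estimate:

```python
import numpy as np

def binned_mi(x: np.ndarray, t: np.ndarray, bins: int = 30) -> float:
    """Estimate I(X; T) in bits with an equal-width binning estimator."""
    def codes(v: np.ndarray) -> np.ndarray:
        v = v.reshape(len(v), -1)
        edges = np.linspace(v.min(), v.max() + 1e-12, bins + 1)
        # Map each row of bin indices to a single discrete symbol.
        return np.unique(np.digitize(v, edges), axis=0, return_inverse=True)[1]
    xs, ts = codes(x), codes(t)
    joint = np.zeros((xs.max() + 1, ts.max() + 1))
    np.add.at(joint, (xs, ts), 1.0)  # accumulate the joint histogram
    p = joint / joint.sum()
    px, pt = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ pt)[nz])).sum())
```

Plotting I(X;T) against I(T;Y) for each layer across training epochs yields the Information Plane trajectories the abstract refers to.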