The deep layers of modern neural networks extract a rich set of features as an input propagates through the network. This paper sets out to harvest these rich intermediate representations for quantization with minimal accuracy loss, while significantly reducing the memory footprint and compute intensity of the DNN. It uses knowledge distillation through the teacher-student paradigm (Hinton et al., 2015) in a novel setting that exploits the feature extraction capability of DNNs for higher-accuracy quantization. Specifically, our algorithm logically divides a pretrained full-precision DNN into multiple sections, each of which exposes intermediate features used to train a team of student sections independently in the quantized domain; the trained students are then simply stitched back together. This divide-and-conquer strategy makes it possible to train each student section in isolation, speeding up training by enabling parallelization. Experiments on various DNNs (AlexNet, LeNet, MobileNet, ResNet-18, ResNet-20, SVHN, and VGG-11) show that this approach, called DCQ (Divide and Conquer Quantization), improves the performance of a state-of-the-art quantized training technique, DoReFa-Net (Zhou et al., 2016), by 21.6% and 9.3% on average for binary and ternary quantization, respectively. Additionally, we show that incorporating DCQ into existing quantized training methods yields higher accuracies than those previously reported by multiple state-of-the-art quantized training methods.
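The sectioning-and-distillation idea described above can be illustrated with a minimal PyTorch sketch. The snippet splits a toy full-precision network into contiguous sections, trains a quantized student copy of each section to match its teacher section's intermediate features, and stitches the students back together. The section boundaries, the straight-through ternary quantizer, and helper names such as split_into_sections and train_student_section are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
# Minimal sketch of sectioning a pretrained network and distilling each
# section into a quantized student (illustrative, not the DCQ code).
import copy
import torch
import torch.nn as nn

def split_into_sections(layers, boundaries):
    """Split an ordered list of layers into contiguous sections
    (assumption: the network is expressible as a simple nn.Sequential)."""
    sections, start = [], 0
    for end in boundaries + [len(layers)]:
        sections.append(nn.Sequential(*layers[start:end]))
        start = end
    return sections

class FakeQuantize(nn.Module):
    """Crude ternary fake-quantizer with a straight-through estimator,
    standing in for whichever quantization scheme the training method uses."""
    def forward(self, x):
        q = torch.sign(x) * (x.abs() > 0.05 * x.abs().max()).float()
        return x + (q - x).detach()  # forward: quantized, backward: identity

def train_student_section(teacher_sec, section_input, epochs=1, lr=1e-3):
    """Train one quantized student section to mimic its teacher section's
    intermediate features (feature-level distillation, done in isolation)."""
    student = copy.deepcopy(teacher_sec)
    # Insert fake quantization after every layer of the student (assumption).
    student = nn.Sequential(*[nn.Sequential(m, FakeQuantize()) for m in student])
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    with torch.no_grad():
        target = teacher_sec(section_input)  # teacher's intermediate features
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(student(section_input), target)
        loss.backward()
        opt.step()
    return student, target  # the target also serves as the next section's input

if __name__ == "__main__":
    teacher = nn.Sequential(nn.Linear(32, 64), nn.ReLU(),
                            nn.Linear(64, 64), nn.ReLU(),
                            nn.Linear(64, 10))
    sections = split_into_sections(list(teacher), boundaries=[2, 4])
    x = torch.randn(128, 32)  # a batch of inputs
    students = []
    for sec in sections:
        student, x = train_student_section(sec, x)  # x = teacher features
        students.append(student)
    stitched = nn.Sequential(*students)  # stitch the quantized students together
```

Because every student's inputs and targets come from the frozen full-precision teacher, the per-section training loops have no dependence on one another, which is what allows them to run in parallel.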
Author Information
Ahmed T. Elthakeb (University of California, San Diego)
Prannoy Pilligundla (University of California, San Diego)
FatemehSadat Mireshghallah (University of California, San Diego)
Alexander Cloninger (University of California, San Diego)
Hadi Esmaeilzadeh (University of California, San Diego)
More from the Same Authors
- 2021 : DP-SGD vs PATE: Which Has Less Disparate Impact on Model Accuracy?
  Archit Uniyal · Rakshit Naidu · Sasikanth Kotti · Patrik Joslin Kenfack · Sahib Singh · FatemehSadat Mireshghallah
- 2021 : Benchmarking Differential Privacy and Federated Learning for BERT Models
  Priyam Basu · Rakshit Naidu · Zumrut Muftuoglu · Sahib Singh · FatemehSadat Mireshghallah
- 2022 : Memorization in NLP Fine-tuning Methods
  FatemehSadat Mireshghallah · Archit Uniyal · Tianhao Wang · David Evans · Taylor Berg-Kirkpatrick
- 2023 : Talk
  FatemehSadat Mireshghallah
- 2023 Workshop: Generative AI and Law (GenLaw)
  Katherine Lee · A. Feder Cooper · FatemehSadat Mireshghallah · Madiha Zahrah · James Grimmelmann · David Mimno · Deep Ganguli · Ludwig Schubert
- 2022 : Evaluating Disentanglement in Generative Models Without Knowledge of Latent Factors
  Chester Holtz · Gal Mishne · Alexander Cloninger
- 2022 : Closing Remarks and Transition to Poster Session
  Tegan Emerson · Henry Kvinge · Tim Doster · Sarah Tymochko · Alexander Cloninger
- 2022 Workshop: Topology, Algebra, and Geometry in Machine Learning (TAG-ML)
  Tegan Emerson · Tim Doster · Henry Kvinge · Alexander Cloninger · Sarah Tymochko
- 2022 : Welcome and Comments from the Organizer
  Tegan Emerson · Henry Kvinge · Tim Doster · Sarah Tymochko · Alexander Cloninger