Quantization is promising for enabling powerful yet complex deep neural networks (DNNs) to be deployed on resource-constrained platforms. However, quantized DNNs are vulnerable to adversarial attacks unless equipped with sophisticated techniques, leading to a dilemma between DNNs' efficiency and robustness. In this work, we demonstrate a new perspective on quantization's role in DNNs' robustness, advocating that quantization can be leveraged to substantially boost DNNs' robustness, and propose a framework dubbed Double-Win Quant that boosts the robustness of quantized DNNs over their full-precision counterparts by a large margin. Specifically, we identify for the first time that when an adversarially trained model is quantized to different precisions in a post-training manner, the associated adversarial attacks transfer poorly between precisions. Leveraging this intriguing observation, we further develop Double-Win Quant, which integrates random precision inference and training to further reduce and exploit this poor adversarial transferability, enabling an aggressive "win-win" in terms of DNNs' robustness and efficiency. Extensive experiments and ablation studies consistently validate Double-Win Quant's effectiveness and its advantages over state-of-the-art (SOTA) adversarial training methods across various attacks, models, and datasets. Our code is available at: https://github.com/RICE-EIC/Double-Win-Quant.
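As a rough illustration of the random precision inference idea described in the abstract, below is a minimal PyTorch-style sketch: it keeps a full-precision copy of an adversarially trained model's weights and, at each forward pass, re-quantizes them to a bitwidth drawn at random from a candidate set, so that an attack crafted at one precision may be evaluated at a different one. The quantizer (`quantize_uniform`), the candidate bitwidths, and the `RandomPrecisionWrapper` class are illustrative assumptions, not the released implementation.

```python
import random
import torch


def quantize_uniform(x, num_bits):
    """Symmetric uniform post-training quantization to `num_bits`
    (a common scheme; the paper's exact quantizer may differ)."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = x.abs().max().clamp_min(1e-8) / qmax
    return torch.round(x / scale).clamp(-qmax, qmax) * scale


class RandomPrecisionWrapper(torch.nn.Module):
    """Hypothetical wrapper: at every forward pass, re-quantize the
    (adversarially trained) model's weights to a randomly sampled bitwidth,
    exploiting the poor attack transferability across precisions."""

    def __init__(self, model, bit_choices=(4, 6, 8)):
        super().__init__()
        self.model = model
        self.bit_choices = bit_choices
        # Keep full-precision copies of the weights to re-quantize from each time.
        self.fp_weights = {n: p.detach().clone() for n, p in model.named_parameters()}

    def forward(self, x):
        bits = random.choice(self.bit_choices)
        with torch.no_grad():
            for name, p in self.model.named_parameters():
                p.copy_(quantize_uniform(self.fp_weights[name], bits))
        return self.model(x)
```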
Author Information
Yonggan Fu (Rice University)
Qixuan Yu (Rice University)
Meng Li (Facebook Inc)
Vikas Chandra (Facebook)
Yingyan Lin (Rice University)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Spotlight: Double-Win Quant: Aggressively Winning Robustness of Quantized Deep Neural Networks via Random Precision Training and Inference »
  Fri. Jul 23rd 02:40 -- 02:45 AM
More from the Same Authors
- 2022 Poster: ShiftAddNAS: Hardware-Inspired Search for More Accurate and Efficient Neural Networks »
  Haoran You · Baopu Li · Shi Huihong · Yonggan Fu · Yingyan Lin
- 2022 Poster: DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks »
  Yonggan Fu · Haichuan Yang · Jiayi Yuan · Meng Li · Cheng Wan · Raghuraman Krishnamoorthi · Vikas Chandra · Yingyan Lin
- 2022 Spotlight: DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks »
  Yonggan Fu · Haichuan Yang · Jiayi Yuan · Meng Li · Cheng Wan · Raghuraman Krishnamoorthi · Vikas Chandra · Yingyan Lin
- 2022 Spotlight: ShiftAddNAS: Hardware-Inspired Search for More Accurate and Efficient Neural Networks »
  Haoran You · Baopu Li · Shi Huihong · Yonggan Fu · Yingyan Lin
- 2021 Poster: AlphaNet: Improved Training of Supernets with Alpha-Divergence »
  Dilin Wang · Chengyue Gong · Meng Li · Qiang Liu · Vikas Chandra
- 2021 Poster: Auto-NBA: Efficient and Effective Search Over the Joint Space of Networks, Bitwidths, and Accelerators »
  Yonggan Fu · Yongan Zhang · Yang Zhang · David Cox · Yingyan Lin
- 2021 Oral: AlphaNet: Improved Training of Supernets with Alpha-Divergence »
  Dilin Wang · Chengyue Gong · Meng Li · Qiang Liu · Vikas Chandra
- 2021 Spotlight: Auto-NBA: Efficient and Effective Search Over the Joint Space of Networks, Bitwidths, and Accelerators »
  Yonggan Fu · Yongan Zhang · Yang Zhang · David Cox · Yingyan Lin
- 2020 Poster: AutoGAN-Distiller: Searching to Compress Generative Adversarial Networks »
  Yonggan Fu · Wuyang Chen · Haotao Wang · Haoran Li · Yingyan Lin · Zhangyang “Atlas” Wang
- 2019 Workshop: Joint Workshop on On-Device Machine Learning & Compact Deep Neural Network Representations (ODML-CDNNR) »
  Sujith Ravi · Zornitsa Kozareva · Lixin Fan · Max Welling · Yurong Chen · Werner Bailer · Brian Kulis · Haoji Hu · Jonathan Dekhtiar · Yingyan Lin · Diana Marculescu
- 2018 Poster: Deep k-Means: Re-Training and Parameter Sharing with Harder Cluster Assignments for Compressing Deep Convolutions »
  Junru Wu · Yue Wang · Zhenyu Wu · Zhangyang Wang · Ashok Veeraraghavan · Yingyan Lin
- 2018 Oral: Deep k-Means: Re-Training and Parameter Sharing with Harder Cluster Assignments for Compressing Deep Convolutions »
  Junru Wu · Yue Wang · Zhenyu Wu · Zhangyang Wang · Ashok Veeraraghavan · Yingyan Lin