

Poster in Workshop: 2nd ICML Workshop on New Frontiers in Adversarial Machine Learning

Benchmarking the Reliability of Post-training Quantization: a Particular Focus on Worst-case Performance

Zhihang Yuan · Jiawei Liu · Jiaxiang Wu · Dawei Yang · Qiang Wu · Guangyu Sun · Wenyu Liu · Xinggang Wang · Bingzhe Wu

Keywords: [ Worst-case Performance ] [ Post-training Quantization ] [ Reliability of Neural Networks ] [ Distribution Shift ]


Abstract:

Post-training quantization (PTQ) is a popular method for compressing deep neural networks (DNNs) without altering their original architecture or training procedures, yet its reliability under extreme conditions such as distribution shift and data noise remains largely unexplored. This paper investigates commonly used PTQ methods, asking how variations in the calibration-set distribution, the choice of calibration paradigm, and data augmentation or sampling strategies affect PTQ reliability. A systematic evaluation across diverse tasks and widely used PTQ paradigms shows that most existing PTQ methods fail to deliver reliable worst-case group performance, underscoring the need for more robust approaches.
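Since the page carries only the abstract, the following is a minimal sketch of the PTQ calibration step it refers to, to make concrete why the calibration set's distribution matters. This is not the paper's benchmark code: the PyTorch model, the loader interface, and the simple symmetric min-max observer are all illustrative assumptions.

# Minimal sketch of post-training quantization (PTQ) calibration.
# Assumptions: a float PyTorch model, a calibration loader yielding
# (input, label) batches, and a symmetric per-tensor min-max observer.
import torch
import torch.nn as nn

def minmax_scale(x: torch.Tensor, num_bits: int = 8) -> float:
    # Symmetric per-tensor scale from the observed absolute maximum,
    # a common (if simple) PTQ calibration choice.
    qmax = 2 ** (num_bits - 1) - 1
    return x.abs().max().item() / qmax

def fake_quantize(x: torch.Tensor, scale: float, num_bits: int = 8) -> torch.Tensor:
    # Quantize-dequantize: round onto the integer grid, then map back to float.
    qmax = 2 ** (num_bits - 1) - 1
    return (x / scale).round().clamp(-qmax - 1, qmax) * scale

@torch.no_grad()
def calibrate_and_quantize(model: nn.Module, calib_loader):
    # Record activation ranges on the calibration set, then fake-quantize
    # weights. If the calibration data's distribution shifts away from the
    # test data, these scales are badly chosen -- the failure mode the
    # paper benchmarks via worst-case group performance.
    act_scales = {}

    def make_hook(name):
        def hook(module, inp, out):
            s = minmax_scale(out)
            act_scales[name] = max(act_scales.get(name, 0.0), s)
        return hook

    handles = [m.register_forward_hook(make_hook(n))
               for n, m in model.named_modules() if isinstance(m, nn.Linear)]
    for x, _ in calib_loader:  # pass calibration data through the model
        model(x)
    for h in handles:
        h.remove()

    for n, m in model.named_modules():  # fake-quantize weights in place
        if isinstance(m, nn.Linear):
            m.weight.copy_(fake_quantize(m.weight, minmax_scale(m.weight)))

    # A full PTQ pipeline would also insert fake-quant ops at inference
    # using act_scales; the sketch only records them.
    return model, act_scales

The sketch uses one calibration pass and a single observer; the paper's research questions correspond to varying exactly these choices (calibration-set distribution, calibration paradigm, and augmentation or sampling of the calibration data).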
