Condition Number Based Low-Bit Quantization for Image Super-Resolution
Kai Liu ⋅ Dehui Wang ⋅ Zhiteng Li ⋅ Zheng Chen ⋅ Yong Guo ⋅ Linghe Kong
Abstract
Low-bit model quantization for image super-resolution (SR) is a longstanding task, valued for its remarkable compression and acceleration ability. However, accuracy degradation is inevitable when compressing the full-precision (FP) model to ultra-low bit widths ($2\sim4$ bits). Experimentally, we observe that this degradation is mainly attributed to the quantization of activations rather than model weights. Since the activation quantization error itself is hard to minimize, minimizing its impact on the output, which is characterized by the condition number, emerges as a better choice. Therefore, we propose CondiQuant, a condition-number-based low-bit post-training quantization method for image super-resolution. Specifically, we formulate the amplification of the activation quantization error in terms of the condition number of the weight matrices. By decoupling the representation ability from the quantization sensitivity, we design an efficient proximal gradient descent algorithm that iteratively minimizes the condition number while maintaining the layer output. With comprehensive experiments, we demonstrate that CondiQuant outperforms existing state-of-the-art post-training quantization methods in accuracy without computation overhead, and achieves the theoretically optimal compression ratio in model parameters. Our code will be released soon.
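To make the core idea concrete: for a linear layer $y = Wx$, a perturbation $\Delta x$ from activation quantization yields a relative output error bounded by $\kappa(W)\,\|\Delta x\|/\|x\|$, where $\kappa(W) = \sigma_{\max}/\sigma_{\min}$ is the condition number, so a better-conditioned $W$ amplifies the same activation error less. The sketch below is a minimal NumPy illustration of this intuition, not the paper's actual algorithm: the fidelity term $\tfrac{1}{2}\|W - W_0\|_F^2$ standing in for output preservation, the singular-value averaging used as the proximal step, and the names `prox_condition` and `condiquant_sketch` are all our assumptions for exposition.

```python
import numpy as np

def cond(W):
    """Condition number kappa(W) = sigma_max / sigma_min."""
    s = np.linalg.svd(W, compute_uv=False)  # singular values, descending
    return s[0] / s[-1]

def prox_condition(W, lam):
    """Hypothetical proximal step for a condition-number penalty:
    shrink the singular values toward their mean (which lies between
    sigma_min and sigma_max), reducing kappa while keeping U, V fixed."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s = (1.0 - lam) * s + lam * s.mean()
    return U @ np.diag(s) @ Vt

def condiquant_sketch(W0, steps=50, lr=0.1, lam=0.05):
    """Toy proximal gradient descent: a gradient step on the fidelity
    term 0.5 * ||W - W0||_F^2 (a stand-in for preserving the layer
    output), followed by the spectral proximal step above."""
    W = W0.copy()
    for _ in range(steps):
        W = W - lr * (W - W0)       # gradient of the fidelity term
        W = prox_condition(W, lam)  # proximal step on kappa(W)
    return W

rng = np.random.default_rng(0)
W0 = rng.standard_normal((64, 64))
W = condiquant_sketch(W0)
print(f"kappa before: {cond(W0):.1f}, after: {cond(W):.1f}")
print(f"relative weight change: "
      f"{np.linalg.norm(W - W0) / np.linalg.norm(W0):.3f}")
```

The two steps pull in opposite directions, and the step sizes `lr` and `lam` trade them off: the gradient step keeps the weights close to $W_0$, while the proximal step compresses the singular-value spread; this is, roughly, the tension the abstract describes between representation ability and quantization sensitivity.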