Selective Coupling of Decoupled Informative Regions: Masked Attention Alignment for Data-Free Quantization of Vision Transformers
Biao Qian ⋅ Yang Wang ⋅ Yong Wu ⋅ Jungong Han
Abstract
Data-Free Quantization (DFQ) addresses data security concerns by synthesizing fake samples without accessing real data. It has garnered increasing attention in the context of Vision Transformers (ViTs), owing to the superiority of the self-attention mechanism over the classical convolutional operation. However, previous DFQ methods for ViTs often suffer from a distribution mismatch between the synthetic samples and the input distribution expected by the quantized model $Q$, resulting in suboptimal performance. In this paper, we propose a novel Masked Attention Alignment approach for Data-Free Quantization of ViTs, named MaskAQ, revealing that: 1) the semantics in the self-attention mechanism are predominantly localized to a sparse subset of patches, called informative regions; 2) the informative regions dominate the mutual information between the synthetic samples and $Q$'s outputs. To these ends, we incorporate differential entropy maximization over the patch similarity of synthetic samples, which decouples the informative regions from the noisy background. To couple with varied $Q$, the informative regions are picked out to align the full-precision model with $Q$ via a masked attention alignment objective, thus yielding high-quality synthetic samples. To further preserve the mutual information between the synthetic samples and the updating $Q$, we propose a periodic sample refreshing strategy that endows MaskAQ with the capacity to continually adapt to the evolving state of $Q$ throughout training. Extensive experiments verify the merits of MaskAQ over state-of-the-art approaches across multiple backbones and downstream tasks, with a Top-1 accuracy gain of up to 3.1% on ImageNet. Our code is available in the supplementary material.
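To make the two core ingredients of the abstract concrete, the following is a minimal PyTorch sketch of (a) selecting informative patches from a patch-similarity score and (b) a masked attention alignment loss between the full-precision and quantized models. The function names, tensor shapes, and the top-k saliency heuristic are illustrative assumptions for exposition, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def informative_patch_mask(patch_tokens, keep_ratio=0.5):
    """Pick out candidate informative patches from ViT patch embeddings.

    patch_tokens: (B, N, D) patch embeddings of the synthetic samples.
    Returns a (B, N) binary mask over patches, using mean patch similarity
    as an assumed saliency proxy for decoupling informative regions from
    the noisy background.
    """
    tokens = F.normalize(patch_tokens, dim=-1)
    sim = tokens @ tokens.transpose(1, 2)        # (B, N, N) patch similarity
    score = sim.mean(dim=-1)                     # (B, N) per-patch saliency
    k = max(1, int(keep_ratio * score.shape[1]))
    idx = score.topk(k, dim=1).indices
    mask = torch.zeros_like(score)
    mask.scatter_(1, idx, 1.0)                   # 1 = informative patch
    return mask

def masked_attention_alignment(attn_fp, attn_q, mask):
    """MSE between full-precision and quantized attention maps,
    restricted to informative patches.

    attn_fp, attn_q: (B, H, N, N) attention maps; mask: (B, N).
    """
    m = mask[:, None, None, :]                   # broadcast over heads/queries
    diff = (attn_fp - attn_q).pow(2) * m         # zero out background patches
    return diff.sum() / m.sum().clamp(min=1.0)
```

In such a setup, the alignment loss would be backpropagated to the synthetic samples during generation, so that the regions carrying the attention semantics, rather than the background, drive the coupling with the current state of $Q$.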