Towards Fine-grained Robustness: Attention-guided Test-time Prompt Tuning for Vision-Language Models
Abstract
Vision-Language Models (VLMs), such as CLIP, have achieved impressive zero-shot performance on downstream tasks and can be further adapted with various fine-tuning methods. However, recent studies have shown that adversarial attacks can significantly degrade the inference ability of VLMs, posing substantial risks to their practical applications. Prevalent test-time adaptation methods typically rely on multi-view augmentation to implement various fine-tuning strategies, which struggle to identify semantic information and tend to destroy discriminative regions in fine-grained scenarios. To address these limitations, we propose Attention-guided Test-time Prompt Tuning (A-TPT), a semantics-preserving method designed for test-time adaptation. We first refine the gradient attention rollout mechanism to identify semantically meaningful regions that survive adversarial attacks. We then leverage these regions to guide spatially varying augmentation intensities and multi-view ensembling for prompt tuning and inference. Extensive experiments demonstrate that A-TPT outperforms existing test-time adaptation methods on both adversarial and clean data. Code is available at https://anonymous.4open.science/r/A-TPT.