Visual Prompt Tuning (VPT) is an effective tuning method for adapting pretrained Vision Transformers (ViTs) to downstream tasks. It leverages extra learnable tokens, known as prompts, which steer the frozen pretrained ViTs. Although VPT has demonstrated its applicability with supervised ViTs, it often underperforms with self-supervised ones. Through empirical observations, we deduce that the effectiveness of VPT hinges largely on which ViT blocks the prompt tokens interact with. Specifically, VPT shows improved performance on image classification tasks for MAE and MoCo v3 when the prompt tokens are inserted into later blocks rather than the first block. These observations suggest that there exists an optimal set of blocks for the insertion of prompt tokens. Unfortunately, identifying the optimal blocks for prompts within each self-supervised ViT for diverse future scenarios is a costly process. To mitigate this problem, we propose a simple yet effective method that learns a gate for each ViT block to adjust its intervention into the prompt tokens. With our method, prompt tokens are selectively influenced by the blocks that require steering for task adaptation. Our method outperforms VPT variants on FGVC and VTAB image classification and on ADE20K semantic segmentation. The code is available at https://github.com/ryongithub/GatedPromptTuning.
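The gating idea in the abstract can be illustrated with a minimal sketch: each block carries a learnable scalar gate, and the prompt tokens receive that block's update only to the degree the (sigmoid-squashed) gate allows, while patch tokens pass through the block unchanged in behavior. This is a hypothetical illustration with NumPy stand-ins, not the authors' released implementation; the function names, the exact gating form, and the toy block are all assumptions — see the linked repository for the actual method.

```python
import numpy as np

def gated_prompt_block(block_fn, tokens, num_prompts, gate_logit):
    """Apply one (frozen) transformer block to a token sequence, letting the
    block intervene in the leading prompt tokens only as much as a learnable
    per-block gate permits.

    Hypothetical sketch: `block_fn`, `gate_logit`, and the interpolation form
    are assumptions for illustration, not the paper's exact formulation.
    gate = sigmoid(gate_logit) in [0, 1]; gate ~ 0 leaves the prompts
    untouched by this block, gate ~ 1 applies the block's full update.
    """
    out = block_fn(tokens)                        # full block output for all tokens
    gate = 1.0 / (1.0 + np.exp(-gate_logit))      # sigmoid gate in [0, 1]
    prompts_in = tokens[:num_prompts]             # prompt tokens before the block
    # Interpolate: gated blend of the block's output and the untouched prompts.
    prompts_out = gate * out[:num_prompts] + (1.0 - gate) * prompts_in
    # Patch (and class) tokens always take the block's output.
    return np.concatenate([prompts_out, out[num_prompts:]], axis=0)

# Toy usage: a stand-in "block" that adds 1 to every token.
tokens = np.zeros((4, 8))                         # 2 prompt tokens + 2 patch tokens
closed = gated_prompt_block(lambda x: x + 1.0, tokens, num_prompts=2, gate_logit=-100.0)
opened = gated_prompt_block(lambda x: x + 1.0, tokens, num_prompts=2, gate_logit=100.0)
```

With the gate driven to 0 (`closed`), the prompt rows stay at 0 while patch rows become 1; with the gate driven to 1 (`opened`), all rows receive the block's update. Training the gate logits end-to-end lets each block learn whether it should steer the prompts at all, which is the selectivity the abstract describes.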
Author Information
Seungryong Yoo
Eunji Kim (Seoul National University)
Dahuin Jung (Seoul National University)
Jungbeom Lee (Seoul National University)
Sungroh Yoon (Seoul National University)
More from the Same Authors
- 2023: De-stereotyping Text-to-image Models through Prompt Tuning
  Eunji Kim · Siwon Kim · Chaehun Shin · Sungroh Yoon
- 2023 Poster: On the Impact of Knowledge Distillation for Model Interpretability
  Hyeongrok Han · Siwon Kim · Hyun-Soo Choi · Sungroh Yoon
- 2023 Poster: Probabilistic Concept Bottleneck Models
  Eunji Kim · Dahuin Jung · Sangha Park · Siwon Kim · Sungroh Yoon
- 2022 Poster: AutoSNN: Towards Energy-Efficient Spiking Neural Networks
  Byunggook Na · Jisoo Mok · Seongsik Park · Dongjin Lee · Hyeokjun Choe · Sungroh Yoon
- 2022 Poster: Dataset Condensation with Contrastive Signals
  Saehyung Lee · Sanghyuk Chun · Sangwon Jung · Sangdoo Yun · Sungroh Yoon
- 2022 Spotlight: Dataset Condensation with Contrastive Signals
  Saehyung Lee · Sanghyuk Chun · Sangwon Jung · Sangdoo Yun · Sungroh Yoon
- 2022 Spotlight: AutoSNN: Towards Energy-Efficient Spiking Neural Networks
  Byunggook Na · Jisoo Mok · Seongsik Park · Dongjin Lee · Hyeokjun Choe · Sungroh Yoon
- 2022 Poster: Guided-TTS: A Diffusion Model for Text-to-Speech via Classifier Guidance
  Heeseung Kim · Sungwon Kim · Sungroh Yoon
- 2022 Spotlight: Guided-TTS: A Diffusion Model for Text-to-Speech via Classifier Guidance
  Heeseung Kim · Sungwon Kim · Sungroh Yoon
- 2022 Poster: Confidence Score for Source-Free Unsupervised Domain Adaptation
  Jonghyun Lee · Dahuin Jung · Junho Yim · Sungroh Yoon
- 2022 Spotlight: Confidence Score for Source-Free Unsupervised Domain Adaptation
  Jonghyun Lee · Dahuin Jung · Junho Yim · Sungroh Yoon
- 2019 Poster: FloWaveNet: A Generative Flow for Raw Audio
  Sungwon Kim · Sang-gil Lee · Jongyoon Song · Jaehyeon Kim · Sungroh Yoon
- 2019 Oral: FloWaveNet: A Generative Flow for Raw Audio
  Sungwon Kim · Sang-gil Lee · Jongyoon Song · Jaehyeon Kim · Sungroh Yoon
- 2019 Poster: HexaGAN: Generative Adversarial Nets for Real World Classification
  Uiwon Hwang · Dahuin Jung · Sungroh Yoon
- 2019 Oral: HexaGAN: Generative Adversarial Nets for Real World Classification
  Uiwon Hwang · Dahuin Jung · Sungroh Yoon