Contrastive adversarial training has successfully improved the robustness of contrastive learning (CL). However, the robustness metrics used in these methods depend on attack algorithms, image labels, and downstream tasks, all of which may affect the consistency and reliability of robustness evaluation for CL. To address these problems, this paper proposes a novel Robustness Verification framework for Contrastive Learning (RVCL). Furthermore, we use extreme value theory to reveal the relationship between the robust radius of the CL encoder and that of the supervised downstream task. Extensive experimental results on various benchmark models and datasets verify our theoretical findings and further demonstrate that RVCL is able to evaluate the robustness of both models and images. Our code is available at https://github.com/wzekai99/RVCL.
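The abstract does not spell out how extreme value theory connects to the robust radius, so the following is a minimal, hypothetical Python sketch of an EVT-based robust-radius estimator in the spirit of the well-known CLEVER recipe (Weng et al., ICLR 2018), not RVCL's actual procedure: sample gradient norms of a margin function over a ball around the input, fit the per-batch maxima with a reverse Weibull distribution, and take its location parameter as an estimate of the local Lipschitz constant, giving radius ≈ margin / Lipschitz. The names `margin_fn`, `eps`, and all constants below are illustrative assumptions.

```python
# Hypothetical sketch of EVT-based robust-radius estimation (CLEVER-style);
# NOT the exact RVCL algorithm. `margin_fn` is an assumed callable.
import torch
from scipy.stats import weibull_max


def estimate_robust_radius(margin_fn, x, eps=0.5, n_batches=50, batch_size=64):
    """Heuristically estimate the robust radius of `margin_fn` at point `x`.

    `margin_fn` maps a batch of inputs to one scalar margin per sample
    (positive while the model's decision is unchanged). Per-batch maxima of
    the local gradient norm are fitted with a reverse Weibull distribution,
    whose location parameter estimates the local Lipschitz constant.
    """
    batch_maxima = []
    for _ in range(n_batches):
        # Sample perturbed points uniformly from an L-inf ball around x.
        delta = (torch.rand(batch_size, *x.shape) * 2 - 1) * eps
        xs = (x.unsqueeze(0) + delta).detach().requires_grad_(True)
        margins = margin_fn(xs)
        grad, = torch.autograd.grad(margins.sum(), xs)
        # The dual norm of L-inf is L1.
        grad_norms = grad.flatten(1).norm(p=1, dim=1)
        batch_maxima.append(grad_norms.max().item())
    # By extreme value theory, the batch maxima follow a reverse Weibull
    # law; its location parameter bounds the gradient norm over the ball.
    _, loc, _ = weibull_max.fit(batch_maxima)
    lipschitz = max(loc, 1e-12)
    with torch.no_grad():
        margin0 = margin_fn(x.unsqueeze(0)).item()
    return max(margin0, 0.0) / lipschitz
```

For a CL encoder, one plausible instance-level choice of `margin_fn` (again an assumption, not taken from the paper) is the similarity of the embedding to its positive pair minus the similarity to the nearest negative, so the estimated radius measures how far the input can move before the instance-discrimination decision flips.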
Author Information
Zekai Wang (Wuhan University)
Weiwei Liu (Wuhan University)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Robustness Verification for Contrastive Learning
  Wed. Jul 20th through Thu the 21st, Room Hall E #420
More from the Same Authors
- 2022: Robustness Verification for Contrastive Learning
  Zekai Wang · Weiwei Liu
- 2023 Poster: Better Diffusion Models Further Improve Adversarial Training
  Zekai Wang · Tianyu Pang · Chao Du · Min Lin · Weiwei Liu · Shuicheng YAN
- 2023 Poster: Delving into Noisy Label Detection with Clean Data
  Chenglin Yu · Xinsong Ma · Weiwei Liu
- 2023 Poster: DDGR: Continual Learning with Deep Diffusion-based Generative Replay
  Rui Gao · Weiwei Liu
- 2023 Oral: Delving into Noisy Label Detection with Clean Data
  Chenglin Yu · Xinsong Ma · Weiwei Liu
- 2020 Poster: Adaptive Adversarial Multi-task Representation Learning
  YUREN MAO · Weiwei Liu · Xuemin Lin
- 2019 Poster: Sparse Extreme Multi-label Learning with Oracle Property
  Weiwei Liu · Xiaobo Shen
- 2019 Oral: Sparse Extreme Multi-label Learning with Oracle Property
  Weiwei Liu · Xiaobo Shen