Oral
Robustness Verification for Contrastive Learning
Zekai Wang · Weiwei Liu

Wed Jul 20 10:50 AM -- 11:10 AM (PDT) @ Room 310

Contrastive adversarial training has successfully improved the robustness of contrastive learning (CL). However, the robustness metrics used in these methods depend on attack algorithms, image labels, and downstream tasks, all of which may undermine the consistency and reliability of robustness evaluation for CL. To address these problems, this paper proposes a novel Robustness Verification framework for Contrastive Learning (RVCL). Furthermore, we use extreme value theory to reveal the relationship between the robust radius of the CL encoder and that of the supervised downstream task. Extensive experimental results on various benchmark models and datasets verify our theoretical findings, and further demonstrate that RVCL can evaluate the robustness of both models and individual images. Our code is available at https://github.com/wzekai99/RVCL.
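To illustrate the flavor of the extreme-value-theory connection the abstract mentions, the sketch below estimates an instance-level robust radius for a toy encoder: it samples points in a ball around an input, records batch maxima of the margin's gradient norm as a local Lipschitz estimate, and returns margin / Lipschitz. This is a hedged, CLEVER-style sketch, not the RVCL method itself; the toy encoder `f`, the margin definition, and all parameters are hypothetical stand-ins (full extreme-value estimation would fit a reverse Weibull to the batch maxima rather than take their max).

```python
import numpy as np

# Toy "encoder": fixed linear map + tanh (a hypothetical stand-in for a
# trained CL encoder, chosen only to make the sketch self-contained).
W = np.array([[1.0, -0.5], [0.3, 0.8]])

def f(x):
    return np.tanh(W @ x)

def margin(x, anchor, negative):
    # Instance-level margin: similarity to a positive anchor minus
    # similarity to a negative sample (an assumed score function).
    z = f(x)
    return z @ anchor - z @ negative

def grad_norm(x, anchor, negative, eps=1e-5):
    # Finite-difference gradient of the margin (avoids autodiff deps).
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x); d[i] = eps
        g[i] = (margin(x + d, anchor, negative)
                - margin(x - d, anchor, negative)) / (2 * eps)
    return np.linalg.norm(g)

def estimate_radius(x, anchor, negative, R=0.5, n_batches=20, batch=50, seed=0):
    # Sample points in the R-ball around x and record the max gradient norm
    # per batch. Extreme value theory says these batch maxima follow a
    # reverse-Weibull law; here we crudely take their overall max as a
    # conservative local Lipschitz estimate L_hat.
    rng = np.random.default_rng(seed)
    maxima = []
    for _ in range(n_batches):
        pts = x + rng.uniform(-R, R, size=(batch, x.size))
        maxima.append(max(grad_norm(p, anchor, negative) for p in pts))
    L_hat = max(maxima)
    return margin(x, anchor, negative) / L_hat  # estimated robust radius

x = np.array([0.5, -0.2])
anchor, negative = f(x), f(np.array([-1.0, 1.0]))
print(round(estimate_radius(x, anchor, negative), 4))
```

The returned radius is a lower-bound-style estimate: no perturbation smaller than it should flip the (toy) margin's sign, under the assumption that `L_hat` upper-bounds the true local Lipschitz constant.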

Author Information

Zekai Wang (Wuhan University)
Weiwei Liu (Wuhan University)

