Self-supervised contrastive pre-training is a powerful tool for learning visual representations without human labels. Prior work has primarily focused on the recognition accuracy of contrastive learning but has overlooked other behavioral aspects. Besides accuracy, robustness plays a critical role in the reliability of machine learning systems. We design and conduct a series of robustness tests to quantify the robustness difference between contrastive learning and supervised learning. These tests apply data corruptions at multiple levels, ranging from pixel-level to patch-level and dataset-level, to either the downstream or the pre-training data. Our tests unveil intriguing robustness behaviors of contrastive and supervised learning. On the one hand, under downstream corruptions, contrastive learning is surprisingly more robust than supervised learning. On the other hand, under pre-training corruptions, contrastive learning is vulnerable to patch shuffling and pixel intensity changes, yet less sensitive to dataset-level distribution change. We analyze these results through the lens of data augmentation and feature properties, which has implications for improving the downstream robustness of supervised pre-training.
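To make the patch-level corruption concrete, below is a minimal sketch of a patch-shuffling transform, assuming PyTorch image tensors of shape (C, H, W); the grid size and shuffling details are illustrative and not necessarily the exact settings used in the paper.

```python
# Hypothetical sketch (not the paper's released code) of the patch-level
# corruption described in the abstract: split an image into a grid of
# patches and randomly permute them.
import torch


def patch_shuffle(img: torch.Tensor, grid: int = 4, seed: int = 0) -> torch.Tensor:
    """Randomly shuffle the patches of a (C, H, W) image over a grid x grid layout."""
    c, h, w = img.shape
    ph, pw = h // grid, w // grid
    # Crop so the image divides evenly into patches.
    img = img[:, : ph * grid, : pw * grid]
    # Cut into grid*grid patches of shape (C, ph, pw).
    patches = img.unfold(1, ph, ph).unfold(2, pw, pw)   # (C, grid, grid, ph, pw)
    patches = patches.contiguous().view(c, grid * grid, ph, pw)
    # Permute the patch order with a fixed seed for reproducibility.
    gen = torch.Generator().manual_seed(seed)
    patches = patches[:, torch.randperm(grid * grid, generator=gen)]
    # Stitch the shuffled patches back into an image.
    patches = patches.view(c, grid, grid, ph, pw)
    rows = [torch.cat(list(patches[:, i].unbind(dim=1)), dim=2) for i in range(grid)]
    return torch.cat(rows, dim=1)


# Example: corrupt a random "image" and check that the shape is preserved.
x = torch.rand(3, 224, 224)
x_corrupted = patch_shuffle(x, grid=4)
assert x_corrupted.shape == x.shape
```

In the robustness tests described above, a corruption like this would be applied either to the pre-training images or to the downstream images, to probe which stage of the pipeline is more sensitive to it.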
Author Information
Yuanyi Zhong (University of Illinois at Urbana-Champaign)
Haoran Tang (University of Pennsylvania)
Junkun Chen (Tsinghua University)
Jian Peng (UIUC)
Yu-Xiong Wang (University of Illinois at Urbana-Champaign)
More from the Same Authors
- 2021: Coordinate-wise Control Variates for Deep Policy Gradients »
  Yuanyi Zhong · Yuan Zhou · Jian Peng
- 2022 Poster: Off-Policy Reinforcement Learning with Delayed Rewards »
  Beining Han · Zhizhou Ren · Zuofan Wu · Yuan Zhou · Jian Peng
- 2022 Spotlight: Off-Policy Reinforcement Learning with Delayed Rewards »
  Beining Han · Zhizhou Ren · Zuofan Wu · Yuan Zhou · Jian Peng
- 2022 Poster: Proximal Exploration for Model-guided Protein Sequence Design »
  Zhizhou Ren · Jiahan Li · Fan Ding · Yuan Zhou · Jianzhu Ma · Jian Peng
- 2022 Poster: Pocket2Mol: Efficient Molecular Sampling Based on 3D Protein Pockets »
  Xingang Peng · Shitong Luo · Jiaqi Guan · Qi Xie · Jian Peng · Jianzhu Ma
- 2022 Spotlight: Pocket2Mol: Efficient Molecular Sampling Based on 3D Protein Pockets »
  Xingang Peng · Shitong Luo · Jiaqi Guan · Qi Xie · Jian Peng · Jianzhu Ma
- 2022 Spotlight: Proximal Exploration for Model-guided Protein Sequence Design »
  Zhizhou Ren · Jiahan Li · Fan Ding · Yuan Zhou · Jianzhu Ma · Jian Peng
- 2022 Poster: Generative Modeling for Multi-task Visual Learning »
  Zhipeng Bao · Martial Hebert · Yu-Xiong Wang
- 2022 Spotlight: Generative Modeling for Multi-task Visual Learning »
  Zhipeng Bao · Martial Hebert · Yu-Xiong Wang
- 2020 Poster: A Chance-Constrained Generative Framework for Sequence Optimization »
  Xianggen Liu · Qiang Liu · Sen Song · Jian Peng
- 2019 Poster: Quantile Stein Variational Gradient Descent for Batch Bayesian Optimization »
  Chengyue Gong · Jian Peng · Qiang Liu
- 2019 Poster: A Gradual, Semi-Discrete Approach to Generative Network Training via Explicit Wasserstein Minimization »
  Yucheng Chen · Matus Telgarsky · Chao Zhang · Bolton Bailey · Daniel Hsu · Jian Peng
- 2019 Oral: Quantile Stein Variational Gradient Descent for Batch Bayesian Optimization »
  Chengyue Gong · Jian Peng · Qiang Liu
- 2019 Oral: A Gradual, Semi-Discrete Approach to Generative Network Training via Explicit Wasserstein Minimization »
  Yucheng Chen · Matus Telgarsky · Chao Zhang · Bolton Bailey · Daniel Hsu · Jian Peng
- 2018 Poster: Learning to Explore via Meta-Policy Gradient »
  Tianbing Xu · Qiang Liu · Liang Zhao · Jian Peng
- 2018 Oral: Learning to Explore via Meta-Policy Gradient »
  Tianbing Xu · Qiang Liu · Liang Zhao · Jian Peng