The recent breakthroughs achieved by contrastive learning accelerate the pace of deploying unsupervised training on real-world data. However, unlabeled data in practice is commonly imbalanced and follows a long-tail distribution, and it is unclear how robustly the latest contrastive learning methods perform in this practical scenario. This paper proposes to explicitly tackle this challenge via a principled framework called Self-Damaging Contrastive Learning (SDCLR), which automatically balances representation learning without knowing the classes. Our main inspiration is drawn from the recent finding that deep models have difficult-to-memorize samples, and that those samples can be exposed through network pruning. It is natural to further hypothesize that long-tail samples are also harder for the model to learn well due to insufficient examples. Hence, the key innovation in SDCLR is to create a dynamic self-competitor model to contrast with the target model, where the self-competitor is a pruned version of the target. During training, contrasting the two models leads to adaptive online mining of the samples most easily forgotten by the current target model, and implicitly emphasizes them more in the contrastive loss. Extensive experiments across multiple datasets and imbalance settings show that SDCLR significantly improves not only overall accuracy but also balancedness, in terms of linear evaluation in both full-shot and few-shot settings. Our code is available at https://github.com/VITA-Group/SDCLR.
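To make the mechanism concrete, here is a minimal PyTorch-style sketch of an SDCLR-like training step. It is an illustration under simplifying assumptions, not the authors' implementation (see the repository above for that): the tiny `SmallEncoder` MLP stands in for the backbone used in the paper, `magnitude_masks` plays the role of the pruning step that creates the self-competitor, `nt_xent_loss` is the usual SimCLR-style contrastive loss, and the pruning mask is recomputed every step here purely for brevity.

```python
# Hedged sketch of an SDCLR-like step; names and model sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallEncoder(nn.Module):
    """Tiny MLP encoder, standing in for the backbone used in the paper."""
    def __init__(self, in_dim=784, hidden=512, out_dim=128):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, out_dim)

    def forward(self, x, masks=None):
        # With masks, this is the pruned ("self-damaged") branch; the underlying
        # parameters are shared with the dense branch, so both receive gradients.
        w1 = self.fc1.weight if masks is None else self.fc1.weight * masks["fc1"]
        w2 = self.fc2.weight if masks is None else self.fc2.weight * masks["fc2"]
        h = F.relu(F.linear(x, w1, self.fc1.bias))
        return F.linear(h, w2, self.fc2.bias)


def magnitude_masks(encoder, sparsity=0.3):
    """Binary masks zeroing the smallest-magnitude weights (recomputed as training proceeds)."""
    masks = {}
    for name, layer in [("fc1", encoder.fc1), ("fc2", encoder.fc2)]:
        w = layer.weight.detach().abs()
        k = max(1, int(sparsity * w.numel()))
        threshold = w.flatten().kthvalue(k).values
        masks[name] = (w > threshold).float()
    return masks


def nt_xent_loss(z1, z2, temperature=0.2):
    """Standard SimCLR-style NT-Xent loss between two batches of embeddings."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                 # (2N, d)
    sim = z @ z.t() / temperature                  # pairwise similarities
    n = z1.size(0)
    diag = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(diag, float("-inf"))     # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)


def sdclr_step(encoder, view1, view2, optimizer, sparsity=0.3):
    """One SDCLR-style update: contrast the dense target branch against its pruned
    self-competitor; samples whose features change most under pruning (the easily
    forgotten, typically long-tail ones) dominate the contrastive loss."""
    masks = magnitude_masks(encoder, sparsity)     # dynamic self-competitor
    z_dense = encoder(view1)                       # target (dense) branch
    z_pruned = encoder(view2, masks=masks)         # pruned branch, weights shared
    loss = nt_xent_loss(z_dense, z_pruned)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# Example with random "augmented views"; in practice these come from a SimCLR
# augmentation pipeline over an (unknown) long-tail dataset.
encoder = SmallEncoder()
optimizer = torch.optim.SGD(encoder.parameters(), lr=0.1)
view1, view2 = torch.randn(64, 784), torch.randn(64, 784)
print(sdclr_step(encoder, view1, view2, optimizer))
```

Because the pruned branch reuses the dense branch's surviving weights, samples whose representations shift most under pruning incur the largest contrastive loss, which is the implicit re-weighting described in the abstract.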
Author Information
Ziyu Jiang (Texas A&M University)
Tianlong Chen (University of Texas at Austin)
Bobak Mortazavi (Texas A&M University)
Zhangyang “Atlas” Wang (University of Texas at Austin)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: Self-Damaging Contrastive Learning »
  Thu. Jul 22nd, 04:00 -- 06:00 PM, Room: Virtual
More from the Same Authors
- 2022 : Invited talk #8 Atlas Wang. Title: “Free Knowledge” in Chest X-rays: Contrastive Learning of Images and Their Radiomics »
  Zhangyang “Atlas” Wang
- 2022 : APP: Anytime Progressive Pruning »
  Diganta Misra · Bharat Runwal · Tianlong Chen · Zhangyang “Atlas” Wang · Irina Rish
- 2022 Poster: Data-Efficient Double-Win Lottery Tickets from Robust Pre-training »
  Tianlong Chen · Zhenyu Zhang · Sijia Liu · Yang Zhang · Shiyu Chang · Zhangyang “Atlas” Wang
- 2022 Poster: Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness »
  Tianlong Chen · Huan Zhang · Zhenyu Zhang · Shiyu Chang · Sijia Liu · Pin-Yu Chen · Zhangyang “Atlas” Wang
- 2022 Spotlight: Data-Efficient Double-Win Lottery Tickets from Robust Pre-training »
  Tianlong Chen · Zhenyu Zhang · Sijia Liu · Yang Zhang · Shiyu Chang · Zhangyang “Atlas” Wang
- 2022 Spotlight: Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness »
  Tianlong Chen · Huan Zhang · Zhenyu Zhang · Shiyu Chang · Sijia Liu · Pin-Yu Chen · Zhangyang “Atlas” Wang
- 2022 Poster: Universality of Winning Tickets: A Renormalization Group Perspective »
  William T. Redman · Tianlong Chen · Zhangyang “Atlas” Wang · Akshunna S. Dogra
- 2022 Poster: VariGrow: Variational Architecture Growing for Task-Agnostic Continual Learning based on Bayesian Novelty »
  Randy Ardywibowo · Zepeng Huo · Zhangyang “Atlas” Wang · Bobak Mortazavi · Shuai Huang · Xiaoning Qian
- 2022 Poster: Partial and Asymmetric Contrastive Learning for Out-of-Distribution Detection in Long-Tailed Recognition »
  Haotao Wang · Aston Zhang · Yi Zhu · Shuai Zheng · Mu Li · Alex Smola · Zhangyang “Atlas” Wang
- 2022 Poster: Training Your Sparse Neural Network Better with Any Mask »
  Ajay Jaiswal · Haoyu Ma · Tianlong Chen · Ying Ding · Zhangyang “Atlas” Wang
- 2022 Oral: Partial and Asymmetric Contrastive Learning for Out-of-Distribution Detection in Long-Tailed Recognition »
  Haotao Wang · Aston Zhang · Yi Zhu · Shuai Zheng · Mu Li · Alex Smola · Zhangyang “Atlas” Wang
- 2022 Spotlight: Universality of Winning Tickets: A Renormalization Group Perspective »
  William T. Redman · Tianlong Chen · Zhangyang “Atlas” Wang · Akshunna S. Dogra
- 2022 Spotlight: Training Your Sparse Neural Network Better with Any Mask »
  Ajay Jaiswal · Haoyu Ma · Tianlong Chen · Ying Ding · Zhangyang “Atlas” Wang
- 2022 Spotlight: VariGrow: Variational Architecture Growing for Task-Agnostic Continual Learning based on Bayesian Novelty »
  Randy Ardywibowo · Zepeng Huo · Zhangyang “Atlas” Wang · Bobak Mortazavi · Shuai Huang · Xiaoning Qian
- 2022 Poster: Coarsening the Granularity: Towards Structurally Sparse Lottery Tickets »
  Tianlong Chen · Xuxi Chen · Xiaolong Ma · Yanzhi Wang · Zhangyang “Atlas” Wang
- 2022 Poster: Removing Batch Normalization Boosts Adversarial Training »
  Haotao Wang · Aston Zhang · Shuai Zheng · Xingjian Shi · Mu Li · Zhangyang “Atlas” Wang
- 2022 Poster: Neural Implicit Dictionary Learning via Mixture-of-Expert Training »
  Peihao Wang · Zhiwen Fan · Tianlong Chen · Zhangyang “Atlas” Wang
- 2022 Spotlight: Removing Batch Normalization Boosts Adversarial Training »
  Haotao Wang · Aston Zhang · Shuai Zheng · Xingjian Shi · Mu Li · Zhangyang “Atlas” Wang
- 2022 Spotlight: Neural Implicit Dictionary Learning via Mixture-of-Expert Training »
  Peihao Wang · Zhiwen Fan · Tianlong Chen · Zhangyang “Atlas” Wang
- 2022 Spotlight: Coarsening the Granularity: Towards Structurally Sparse Lottery Tickets »
  Tianlong Chen · Xuxi Chen · Xiaolong Ma · Yanzhi Wang · Zhangyang “Atlas” Wang
- 2021 Poster: Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm »
  Mingkang Zhu · Tianlong Chen · Zhangyang “Atlas” Wang
- 2021 Oral: Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm »
  Mingkang Zhu · Tianlong Chen · Zhangyang “Atlas” Wang
- 2021 Poster: Graph Contrastive Learning Automated »
  Yuning You · Tianlong Chen · Yang Shen · Zhangyang “Atlas” Wang
- 2021 Oral: Graph Contrastive Learning Automated »
  Yuning You · Tianlong Chen · Yang Shen · Zhangyang “Atlas” Wang
- 2021 Poster: A Unified Lottery Ticket Hypothesis for Graph Neural Networks »
  Tianlong Chen · Yongduo Sui · Xuxi Chen · Aston Zhang · Zhangyang “Atlas” Wang
- 2021 Poster: Efficient Lottery Ticket Finding: Less Data is More »
  Zhenyu Zhang · Xuxi Chen · Tianlong Chen · Zhangyang “Atlas” Wang
- 2021 Spotlight: Efficient Lottery Ticket Finding: Less Data is More »
  Zhenyu Zhang · Xuxi Chen · Tianlong Chen · Zhangyang “Atlas” Wang
- 2021 Spotlight: A Unified Lottery Ticket Hypothesis for Graph Neural Networks »
  Tianlong Chen · Yongduo Sui · Xuxi Chen · Aston Zhang · Zhangyang “Atlas” Wang
- 2020 Poster: Self-PU: Self Boosted and Calibrated Positive-Unlabeled Training »
  Xuxi Chen · Wuyang Chen · Tianlong Chen · Ye Yuan · Chen Gong · Kewei Chen · Zhangyang “Atlas” Wang
- 2020 Poster: When Does Self-Supervision Help Graph Convolutional Networks? »
  Yuning You · Tianlong Chen · Zhangyang “Atlas” Wang · Yang Shen
- 2020 Poster: Automated Synthetic-to-Real Generalization »
  Wuyang Chen · Zhiding Yu · Zhangyang “Atlas” Wang · Anima Anandkumar
- 2020 Poster: Eliminating the Invariance on the Loss Landscape of Linear Autoencoders »
  Reza Oftadeh · Jiayi Shen · Zhangyang “Atlas” Wang · Dylan Shell
- 2020 Poster: BoXHED: Boosted eXact Hazard Estimator with Dynamic covariates »
  Xiaochen Wang · Arash Pakbin · Bobak Mortazavi · Hongyu Zhao · Donald Lee
- 2020 Poster: NADS: Neural Architecture Distribution Search for Uncertainty Awareness »
  Randy Ardywibowo · Shahin Boluki · Xinyu Gong · Zhangyang “Atlas” Wang · Xiaoning Qian
- 2020 Poster: AutoGAN-Distiller: Searching to Compress Generative Adversarial Networks »
  Yonggan Fu · Wuyang Chen · Haotao Wang · Haoran Li · Yingyan Lin · Zhangyang “Atlas” Wang