Pre-training serves as a broadly adopted starting point for transfer learning on various downstream tasks. Recent investigations of the lottery ticket hypothesis (LTH) demonstrate that such enormous pre-trained models can be replaced by extremely sparse subnetworks (a.k.a. matching subnetworks) without sacrificing transferability. However, practical security-critical applications usually pose requirements beyond standard transfer, demanding that these subnetworks also overcome adversarial vulnerability. In this paper, we formulate a more rigorous concept, Double-Win Lottery Tickets, in which a subnetwork located in a pre-trained model can be independently transferred to diverse downstream tasks and reach BOTH the same standard and robust generalization, under BOTH standard and adversarial training regimes, as the full pre-trained model. We comprehensively examine various pre-training mechanisms and find that robust pre-training tends to craft sparser double-win lottery tickets with superior performance over their standard counterparts. For example, on downstream CIFAR-10/100 datasets, we identify double-win matching subnetworks from standard, fast adversarial, and adversarial ImageNet pre-training at 89.26%/73.79%, 89.26%/79.03%, and 91.41%/83.22% sparsity, respectively. Furthermore, we observe that the obtained double-win lottery tickets transfer more data-efficiently under practical data-limited (e.g., 1% and 10%) downstream schemes. Our results show that the benefits of robust pre-training are amplified by the lottery ticket scheme, as well as the data-limited transfer setting. Code is available at https://github.com/VITA-Group/Double-Win-LTH.
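To make the pipeline concrete, below is a minimal sketch (not the authors' released code) of how a double-win ticket could be extracted and evaluated: iterative magnitude pruning (IMP) on a pre-trained backbone with rewinding to the pre-trained weights, followed by either standard or adversarial (PGD-based) fine-tuning of the masked subnetwork. The function names (`find_double_win_ticket`, `pgd_attack`), the `train_loader`, and all hyperparameters (20% pruning per round, eps=8/255 PGD) are illustrative assumptions, not values taken from the paper.

```python
# Illustrative sketch only: IMP with weight rewinding on a pre-trained model,
# plus an optional adversarial-training branch. All names and hyperparameters
# are assumptions for exposition, not the authors' implementation.
import copy
import torch
import torch.nn.functional as F


def global_magnitude_mask(model, sparsity):
    """Binary masks that prune the `sparsity` fraction of smallest-magnitude weights."""
    scores = torch.cat([p.detach().abs().flatten()
                        for p in model.parameters() if p.dim() > 1])
    k = int(sparsity * scores.numel())
    threshold = torch.kthvalue(scores, k).values if k > 0 else scores.min() - 1
    return {name: (p.detach().abs() > threshold).float()
            for name, p in model.named_parameters() if p.dim() > 1}


def apply_mask(model, masks):
    """Zero out pruned weights in place."""
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name])


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """L-inf PGD adversary (pixel-range clamping omitted for brevity)."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).detach()


def find_double_win_ticket(pretrained, train_loader, rounds=5, prune_rate=0.2,
                           adversarial=False, epochs_per_round=1, lr=0.01):
    """IMP: fine-tune, prune 20% of remaining weights, rewind to pre-trained weights."""
    init_state = copy.deepcopy(pretrained.state_dict())
    model = copy.deepcopy(pretrained)
    masks = None
    for r in range(rounds):
        opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
        for _ in range(epochs_per_round):
            for x, y in train_loader:
                if adversarial:                      # adversarial-training regime
                    x = pgd_attack(model, x, y)
                opt.zero_grad()
                F.cross_entropy(model(x), y).backward()
                opt.step()
                if masks is not None:                # keep pruned weights at zero
                    apply_mask(model, masks)
        sparsity = 1 - (1 - prune_rate) ** (r + 1)   # cumulative sparsity after this round
        masks = global_magnitude_mask(model, sparsity)
        model.load_state_dict(init_state)            # rewind to pre-trained initialization
        apply_mask(model, masks)
    return model, masks
```

A candidate subnetwork produced this way would then be fine-tuned twice on each downstream task, once with standard training and once with adversarial training, and counted as a double-win ticket only if both runs match the full pre-trained model's standard and robust accuracy.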
Author Information
Tianlong Chen (University of Texas at Austin)
Zhenyu Zhang (University of Science and Technology of China)
Sijia Liu (Michigan State University)
Yang Zhang (MIT-IBM Watson AI Lab)
Shiyu Chang (UCSB)
Zhangyang “Atlas” Wang (University of Texas at Austin)
Related Events (a corresponding poster, oral, or spotlight)
-
2022 Poster: Data-Efficient Double-Win Lottery Tickets from Robust Pre-training »
Thu. Jul 21st through Fri. Jul 22nd, Hall E #505
More from the Same Authors
-
2022 : Saliency Guided Adversarial Training for Tackling Generalization Gap with Applications to Medical Imaging Classification System »
Xin Li · Yao Qiang · Chengyin Li · Sijia Liu · Dongxiao Zhu -
2023 : H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models »
Zhenyu Zhang · Ying Sheng · Tianyi Zhou · Tianlong Chen · Lianmin Zheng · Ruisi Cai · Zhao Song · Yuandong Tian · Christopher Re · Clark Barrett · Zhangyang “Atlas” Wang · Beidi Chen -
2023 : Atlas Wang »
Zhangyang “Atlas” Wang -
2023 Workshop: 2nd ICML Workshop on New Frontiers in Adversarial Machine Learning »
Sijia Liu · Pin-Yu Chen · Dongxiao Zhu · Eric Wong · Kathrin Grosse · Baharan Mirzasoleiman · Sanmi Koyejo -
2023 Oral: Patch-level Routing in Mixture-of-Experts is Provably Sample-efficient for Convolutional Neural Networks »
Mohammed Nowaz Rabbani Chowdhury · Shuai Zhang · Meng Wang · Sijia Liu · Pin-Yu Chen -
2023 Poster: Learning to Optimize Differentiable Games »
Xuxi Chen · Nelson Vadori · Tianlong Chen · Zhangyang “Atlas” Wang -
2023 Poster: Robust Weight Signatures: Gaining Robustness as Easy as Patching Weights? »
Ruisi Cai · Zhenyu Zhang · Zhangyang “Atlas” Wang -
2023 Poster: Linearly Constrained Bilevel Optimization: A Smoothed Implicit Gradient Approach »
Prashant Khanduri · Ioannis Tsaknakis · Yihua Zhang · Jia Liu · Sijia Liu · Jiawei Zhang · Mingyi Hong -
2023 Poster: Towards Coherent Image Inpainting Using Denoising Diffusion Implicit Models »
Guanhua Zhang · Jiabao Ji · Yang Zhang · Mo Yu · Tommi Jaakkola · Shiyu Chang -
2023 Poster: Data Efficient Neural Scaling Law via Model Reusing »
Peihao Wang · Rameswar Panda · Zhangyang “Atlas” Wang -
2023 Oral: Instant Soup: Cheap Pruning Ensembles in A Single Pass Can Draw Lottery Tickets from Large Models »
Ajay Jaiswal · Shiwei Liu · Tianlong Chen · Ding · Zhangyang “Atlas” Wang -
2023 Poster: Master-ASR: Achieving Multilingual Scalability and Low-Resource Adaptation in ASR with Modular Learning »
Zhongzhi Yu · Yang Zhang · Kaizhi Qian · Cheng Wan · Yonggan Fu · Yongan Zhang · Yingyan (Celine) Lin -
2023 Poster: Are Large Kernels Better Teachers than Transformers for ConvNets? »
Tianjin Huang · Lu Yin · Zhenyu Zhang · Li Shen · Meng Fang · Mykola Pechenizkiy · Zhangyang “Atlas” Wang · Shiwei Liu -
2023 Poster: Outline, Then Details: Syntactically Guided Coarse-To-Fine Code Generation »
Wenqing Zheng · S P Sharan · Ajay Jaiswal · Kevin Wang · Yihan Xi · Dejia Xu · Zhangyang “Atlas” Wang -
2023 Poster: Towards Constituting Mathematical Structures for Learning to Optimize »
Jialin Liu · Xiaohan Chen · Zhangyang “Atlas” Wang · Wotao Yin · HanQin Cai -
2023 Poster: PromptBoosting: Black-Box Text Classification with Ten Forward Passes »
Bairu Hou · Joe O'Connor · Jacob Andreas · Shiyu Chang · Yang Zhang -
2023 Poster: Graph Ladling: Shockingly Simple Parallel GNN Training without Intermediate Communication »
Ajay Jaiswal · Shiwei Liu · Tianlong Chen · Ding · Zhangyang “Atlas” Wang -
2023 Poster: Patch-level Routing in Mixture-of-Experts is Provably Sample-efficient for Convolutional Neural Networks »
Mohammed Nowaz Rabbani Chowdhury · Shuai Zhang · Meng Wang · Sijia Liu · Pin-Yu Chen -
2023 Poster: Lowering the Pre-training Tax for Gradient-based Subset Training: A Lightweight Distributed Pre-Training Toolkit »
Yeonju Ro · Zhangyang “Atlas” Wang · Vijay Chidambaram · Aditya Akella -
2023 Poster: Instant Soup: Cheap Pruning Ensembles in A Single Pass Can Draw Lottery Tickets from Large Models »
Ajay Jaiswal · Shiwei Liu · Tianlong Chen · Ding · Zhangyang “Atlas” Wang -
2022 : Invited talk #8 Atlas Wang. Title: “Free Knowledge” in Chest X-rays: Contrastive Learning of Images and Their Radiomics »
Zhangyang “Atlas” Wang -
2022 : APP: Anytime Progressive Pruning »
Diganta Misra · Bharat Runwal · Tianlong Chen · Zhangyang “Atlas” Wang · Irina Rish -
2022 Workshop: New Frontiers in Adversarial Machine Learning »
Sijia Liu · Pin-Yu Chen · Dongxiao Zhu · Eric Wong · Kathrin Grosse · Hima Lakkaraju · Sanmi Koyejo -
2022 Poster: Learning Stable Classifiers by Transferring Unstable Features »
Yujia Bao · Shiyu Chang · Regina Barzilay -
2022 Poster: Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness »
Tianlong Chen · Huan Zhang · Zhenyu Zhang · Shiyu Chang · Sijia Liu · Pin-Yu Chen · Zhangyang “Atlas” Wang -
2022 Poster: ContentVec: An Improved Self-Supervised Speech Representation by Disentangling Speakers »
Kaizhi Qian · Yang Zhang · Heting Gao · Junrui Ni · Cheng-I Lai · David Cox · Mark Hasegawa-Johnson · Shiyu Chang -
2022 Spotlight: Learning Stable Classifiers by Transferring Unstable Features »
Yujia Bao · Shiyu Chang · Regina Barzilay -
2022 Spotlight: ContentVec: An Improved Self-Supervised Speech Representation by Disentangling Speakers »
Kaizhi Qian · Yang Zhang · Heting Gao · Junrui Ni · Cheng-I Lai · David Cox · Mark Hasegawa-Johnson · Shiyu Chang -
2022 Spotlight: Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness »
Tianlong Chen · Huan Zhang · Zhenyu Zhang · Shiyu Chang · Sijia Liu · Pin-Yu Chen · Zhangyang “Atlas” Wang -
2022 Poster: Universality of Winning Tickets: A Renormalization Group Perspective »
William T. Redman · Tianlong Chen · Zhangyang “Atlas” Wang · Akshunna S. Dogra -
2022 Poster: Generalization Guarantee of Training Graph Convolutional Networks with Graph Topology Sampling »
Hongkang Li · Meng Wang · Sijia Liu · Pin-Yu Chen · Jinjun Xiong -
2022 Poster: VariGrow: Variational Architecture Growing for Task-Agnostic Continual Learning based on Bayesian Novelty »
Randy Ardywibowo · Zepeng Huo · Zhangyang “Atlas” Wang · Bobak Mortazavi · Shuai Huang · Xiaoning Qian -
2022 Poster: Partial and Asymmetric Contrastive Learning for Out-of-Distribution Detection in Long-Tailed Recognition »
Haotao Wang · Aston Zhang · Yi Zhu · Shuai Zheng · Mu Li · Alex Smola · Zhangyang “Atlas” Wang -
2022 Poster: Training Your Sparse Neural Network Better with Any Mask »
Ajay Jaiswal · Haoyu Ma · Tianlong Chen · Ying Ding · Zhangyang “Atlas” Wang -
2022 Poster: Revisiting and Advancing Fast Adversarial Training Through The Lens of Bi-Level Optimization »
Yihua Zhang · Guanhua Zhang · Prashant Khanduri · Mingyi Hong · Shiyu Chang · Sijia Liu -
2022 Spotlight: Generalization Guarantee of Training Graph Convolutional Networks with Graph Topology Sampling »
Hongkang Li · Meng Wang · Sijia Liu · Pin-Yu Chen · Jinjun Xiong -
2022 Oral: Partial and Asymmetric Contrastive Learning for Out-of-Distribution Detection in Long-Tailed Recognition »
Haotao Wang · Aston Zhang · Yi Zhu · Shuai Zheng · Mu Li · Alex Smola · Zhangyang “Atlas” Wang -
2022 Spotlight: Universality of Winning Tickets: A Renormalization Group Perspective »
William T. Redman · Tianlong Chen · Zhangyang “Atlas” Wang · Akshunna S. Dogra -
2022 Spotlight: Revisiting and Advancing Fast Adversarial Training Through The Lens of Bi-Level Optimization »
Yihua Zhang · Guanhua Zhang · Prashant Khanduri · Mingyi Hong · Shiyu Chang · Sijia Liu -
2022 Spotlight: Training Your Sparse Neural Network Better with Any Mask »
Ajay Jaiswal · Haoyu Ma · Tianlong Chen · Ying Ding · Zhangyang “Atlas” Wang -
2022 Spotlight: VariGrow: Variational Architecture Growing for Task-Agnostic Continual Learning based on Bayesian Novelty »
Randy Ardywibowo · Zepeng Huo · Zhangyang “Atlas” Wang · Bobak Mortazavi · Shuai Huang · Xiaoning Qian -
2022 Poster: Coarsening the Granularity: Towards Structurally Sparse Lottery Tickets »
Tianlong Chen · Xuxi Chen · Xiaolong Ma · Yanzhi Wang · Zhangyang “Atlas” Wang -
2022 Poster: Removing Batch Normalization Boosts Adversarial Training »
Haotao Wang · Aston Zhang · Shuai Zheng · Xingjian Shi · Mu Li · Zhangyang “Atlas” Wang -
2022 Poster: Revisiting Contrastive Learning through the Lens of Neighborhood Component Analysis: an Integrated Framework »
Ching-Yun (Irene) Ko · Jeet Mohapatra · Sijia Liu · Pin-Yu Chen · Luca Daniel · Lily Weng -
2022 Poster: Neural Implicit Dictionary Learning via Mixture-of-Expert Training »
Peihao Wang · Zhiwen Fan · Tianlong Chen · Zhangyang “Atlas” Wang -
2022 Spotlight: Removing Batch Normalization Boosts Adversarial Training »
Haotao Wang · Aston Zhang · Shuai Zheng · Xingjian Shi · Mu Li · Zhangyang “Atlas” Wang -
2022 Spotlight: Neural Implicit Dictionary Learning via Mixture-of-Expert Training »
Peihao Wang · Zhiwen Fan · Tianlong Chen · Zhangyang “Atlas” Wang -
2022 Spotlight: Revisiting Contrastive Learning through the Lens of Neighborhood Component Analysis: an Integrated Framework »
Ching-Yun (Irene) Ko · Jeet Mohapatra · Sijia Liu · Pin-Yu Chen · Luca Daniel · Lily Weng -
2022 Spotlight: Coarsening the Granularity: Towards Structurally Sparse Lottery Tickets »
Tianlong Chen · Xuxi Chen · Xiaolong Ma · Yanzhi Wang · Zhangyang “Atlas” Wang -
2021 Poster: Global Prosody Style Transfer Without Text Transcriptions »
Kaizhi Qian · Yang Zhang · Shiyu Chang · Jinjun Xiong · Chuang Gan · David Cox · Mark Hasegawa-Johnson -
2021 Poster: Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm »
Mingkang Zhu · Tianlong Chen · Zhangyang “Atlas” Wang -
2021 Oral: Global Prosody Style Transfer Without Text Transcriptions »
Kaizhi Qian · Yang Zhang · Shiyu Chang · Jinjun Xiong · Chuang Gan · David Cox · Mark Hasegawa-Johnson -
2021 Oral: Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm »
Mingkang Zhu · Tianlong Chen · Zhangyang “Atlas” Wang -
2021 Poster: Predict then Interpolate: A Simple Algorithm to Learn Stable Classifiers »
Yujia Bao · Shiyu Chang · Regina Barzilay -
2021 Poster: Graph Contrastive Learning Automated »
Yuning You · Tianlong Chen · Yang Shen · Zhangyang “Atlas” Wang -
2021 Poster: Self-Damaging Contrastive Learning »
Ziyu Jiang · Tianlong Chen · Bobak Mortazavi · Zhangyang “Atlas” Wang -
2021 Oral: Graph Contrastive Learning Automated »
Yuning You · Tianlong Chen · Yang Shen · Zhangyang “Atlas” Wang -
2021 Spotlight: Predict then Interpolate: A Simple Algorithm to Learn Stable Classifiers »
Yujia Bao · Shiyu Chang · Regina Barzilay -
2021 Spotlight: Self-Damaging Contrastive Learning »
Ziyu Jiang · Tianlong Chen · Bobak Mortazavi · Zhangyang “Atlas” Wang -
2021 Poster: Lottery Ticket Preserves Weight Correlation: Is It Desirable or Not? »
Ning Liu · Geng Yuan · Zhengping Che · Xuan Shen · Xiaolong Ma · Qing Jin · Jian Ren · Jian Tang · Sijia Liu · Yanzhi Wang -
2021 Poster: Auto-NBA: Efficient and Effective Search Over the Joint Space of Networks, Bitwidths, and Accelerators »
Yonggan Fu · Yongan Zhang · Yang Zhang · David Cox · Yingyan Lin -
2021 Spotlight: Auto-NBA: Efficient and Effective Search Over the Joint Space of Networks, Bitwidths, and Accelerators »
Yonggan Fu · Yongan Zhang · Yang Zhang · David Cox · Yingyan Lin -
2021 Spotlight: Lottery Ticket Preserves Weight Correlation: Is It Desirable or Not? »
Ning Liu · Geng Yuan · Zhengping Che · Xuan Shen · Xiaolong Ma · Qing Jin · Jian Ren · Jian Tang · Sijia Liu · Yanzhi Wang -
2021 Poster: A Unified Lottery Ticket Hypothesis for Graph Neural Networks »
Tianlong Chen · Yongduo Sui · Xuxi Chen · Aston Zhang · Zhangyang “Atlas” Wang -
2021 Poster: Efficient Lottery Ticket Finding: Less Data is More »
Zhenyu Zhang · Xuxi Chen · Tianlong Chen · Zhangyang “Atlas” Wang -
2021 Spotlight: Efficient Lottery Ticket Finding: Less Data is More »
Zhenyu Zhang · Xuxi Chen · Tianlong Chen · Zhangyang “Atlas” Wang -
2021 Spotlight: A Unified Lottery Ticket Hypothesis for Graph Neural Networks »
Tianlong Chen · Yongduo Sui · Xuxi Chen · Aston Zhang · Zhangyang “Atlas” Wang -
2020 Poster: Self-PU: Self Boosted and Calibrated Positive-Unlabeled Training »
Xuxi Chen · Wuyang Chen · Tianlong Chen · Ye Yuan · Chen Gong · Kewei Chen · Zhangyang “Atlas” Wang -
2020 Poster: When Does Self-Supervision Help Graph Convolutional Networks? »
Yuning You · Tianlong Chen · Zhangyang “Atlas” Wang · Yang Shen -
2020 Poster: Invariant Rationalization »
Shiyu Chang · Yang Zhang · Mo Yu · Tommi Jaakkola -
2020 Poster: Automated Synthetic-to-Real Generalization »
Wuyang Chen · Zhiding Yu · Zhangyang “Atlas” Wang · Anima Anandkumar -
2020 Poster: Proper Network Interpretability Helps Adversarial Robustness in Classification »
Akhilan Boopathy · Sijia Liu · Gaoyuan Zhang · Cynthia Liu · Pin-Yu Chen · Shiyu Chang · Luca Daniel -
2020 Poster: Eliminating the Invariance on the Loss Landscape of Linear Autoencoders »
Reza Oftadeh · Jiayi Shen · Zhangyang “Atlas” Wang · Dylan Shell -
2020 Poster: Unsupervised Speech Decomposition via Triple Information Bottleneck »
Kaizhi Qian · Yang Zhang · Shiyu Chang · Mark Hasegawa-Johnson · David Cox -
2020 Poster: NADS: Neural Architecture Distribution Search for Uncertainty Awareness »
Randy Ardywibowo · Shahin Boluki · Xinyu Gong · Zhangyang “Atlas” Wang · Xiaoning Qian -
2020 Poster: AutoGAN-Distiller: Searching to Compress Generative Adversarial Networks »
Yonggan Fu · Wuyang Chen · Haotao Wang · Haoran Li · Yingyan Lin · Zhangyang “Atlas” Wang -
2019 Poster: AutoVC: Zero-Shot Voice Style Transfer with Only Autoencoder Loss »
Kaizhi Qian · Yang Zhang · Shiyu Chang · Xuesong Yang · Mark Hasegawa-Johnson -
2019 Oral: AutoVC: Zero-Shot Voice Style Transfer with Only Autoencoder Loss »
Kaizhi Qian · Yang Zhang · Shiyu Chang · Xuesong Yang · Mark Hasegawa-Johnson