The compression of Generative Adversarial Networks (GANs) has lately drawn attention, due to the increasing demand for deploying GANs into mobile devices for numerous applications such as image translation, enhancement, and editing. However, compared with the substantial efforts devoted to compressing other deep models, research on compressing GANs (usually the generators) remains in its infancy. Existing GAN compression algorithms are limited to handling specific GAN architectures and losses. Inspired by the recent success of AutoML in deep compression, we introduce AutoML to GAN compression and develop an AutoGAN-Distiller (AGD) framework. Starting with a specifically designed efficient search space, AGD performs an end-to-end discovery of new efficient generators, given the target computational resource constraints. The search is guided by the original GAN model via knowledge distillation, thereby fulfilling the compression. AGD is fully automatic, standalone (i.e., needing no trained discriminators), and generically applicable to various GAN models. We evaluate AGD on two representative GAN tasks: image translation and super resolution. Without bells and whistles, AGD yields remarkably lightweight yet competitive compressed models that largely outperform existing alternatives. Our codes and pretrained models are available at: https://github.com/TAMU-VITA/AGD.
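The search objective described in the abstract (a knowledge-distillation loss from the original generator, subject to a target compute budget) can be sketched as a toy scalar objective. This is a minimal illustration, not the paper's exact formulation: the function name `agd_search_loss`, the L1 distillation term, and the soft FLOPs penalty are all illustrative assumptions (AGD uses task-specific distillation losses and its own constraint handling).

```python
def agd_search_loss(student_out, teacher_out, flops, budget, lam=0.1):
    """Toy search objective: distillation term plus a soft compute penalty.

    student_out / teacher_out: flat lists of output values (stand-ins for
    the compressed generator's and the original generator's outputs).
    flops / budget: candidate cost and target computational budget.
    lam: weight balancing the budget penalty against distillation.
    """
    # L1 distillation loss between student and teacher outputs
    distill = sum(abs(s - t) for s, t in zip(student_out, teacher_out)) / len(student_out)
    # Penalize only when the candidate architecture exceeds the budget
    penalty = max(0.0, flops / budget - 1.0)
    return distill + lam * penalty

# A candidate that matches the teacher and fits the budget incurs no loss;
# one that drifts from the teacher or overshoots the budget is penalized.
on_budget = agd_search_loss([1.0, 0.5], [1.0, 0.5], flops=5e8, budget=1e9)
over_budget = agd_search_loss([0.9, 0.2], [1.0, 0.0], flops=1.2e9, budget=1e9)
```

In the actual framework this trade-off is optimized over a differentiable search space of generator architectures rather than evaluated pointwise as here.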
Author Information
Yonggan Fu (Rice University)
Wuyang Chen (Texas A&M University)
Haotao Wang (Texas A&M University)
Haoran Li (Rice University)
Yingyan Lin (Rice University)
Zhangyang “Atlas” Wang (University of Texas at Austin)
More from the Same Authors
- 2022: Invited talk #8 Atlas Wang. Title: “Free Knowledge” in Chest X-rays: Contrastive Learning of Images and Their Radiomics (Zhangyang “Atlas” Wang)
- 2022: APP: Anytime Progressive Pruning (Diganta Misra · Bharat Runwal · Tianlong Chen · Zhangyang “Atlas” Wang · Irina Rish)
- 2022 Poster: Data-Efficient Double-Win Lottery Tickets from Robust Pre-training (Tianlong Chen · Zhenyu Zhang · Sijia Liu · Yang Zhang · Shiyu Chang · Zhangyang “Atlas” Wang)
- 2022 Poster: Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness (Tianlong Chen · Huan Zhang · Zhenyu Zhang · Shiyu Chang · Sijia Liu · Pin-Yu Chen · Zhangyang “Atlas” Wang)
- 2022 Spotlight: Data-Efficient Double-Win Lottery Tickets from Robust Pre-training (Tianlong Chen · Zhenyu Zhang · Sijia Liu · Yang Zhang · Shiyu Chang · Zhangyang “Atlas” Wang)
- 2022 Spotlight: Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness (Tianlong Chen · Huan Zhang · Zhenyu Zhang · Shiyu Chang · Sijia Liu · Pin-Yu Chen · Zhangyang “Atlas” Wang)
- 2022 Poster: Universality of Winning Tickets: A Renormalization Group Perspective (William T. Redman · Tianlong Chen · Zhangyang “Atlas” Wang · Akshunna S. Dogra)
- 2022 Poster: VariGrow: Variational Architecture Growing for Task-Agnostic Continual Learning based on Bayesian Novelty (Randy Ardywibowo · Zepeng Huo · Zhangyang “Atlas” Wang · Bobak Mortazavi · Shuai Huang · Xiaoning Qian)
- 2022 Poster: Partial and Asymmetric Contrastive Learning for Out-of-Distribution Detection in Long-Tailed Recognition (Haotao Wang · Aston Zhang · Yi Zhu · Shuai Zheng · Mu Li · Alex Smola · Zhangyang “Atlas” Wang)
- 2022 Poster: Training Your Sparse Neural Network Better with Any Mask (Ajay Jaiswal · Haoyu Ma · Tianlong Chen · Ying Ding · Zhangyang “Atlas” Wang)
- 2022 Oral: Partial and Asymmetric Contrastive Learning for Out-of-Distribution Detection in Long-Tailed Recognition (Haotao Wang · Aston Zhang · Yi Zhu · Shuai Zheng · Mu Li · Alex Smola · Zhangyang “Atlas” Wang)
- 2022 Spotlight: Universality of Winning Tickets: A Renormalization Group Perspective (William T. Redman · Tianlong Chen · Zhangyang “Atlas” Wang · Akshunna S. Dogra)
- 2022 Spotlight: Training Your Sparse Neural Network Better with Any Mask (Ajay Jaiswal · Haoyu Ma · Tianlong Chen · Ying Ding · Zhangyang “Atlas” Wang)
- 2022 Spotlight: VariGrow: Variational Architecture Growing for Task-Agnostic Continual Learning based on Bayesian Novelty (Randy Ardywibowo · Zepeng Huo · Zhangyang “Atlas” Wang · Bobak Mortazavi · Shuai Huang · Xiaoning Qian)
- 2022 Poster: Coarsening the Granularity: Towards Structurally Sparse Lottery Tickets (Tianlong Chen · Xuxi Chen · Xiaolong Ma · Yanzhi Wang · Zhangyang “Atlas” Wang)
- 2022 Poster: ShiftAddNAS: Hardware-Inspired Search for More Accurate and Efficient Neural Networks (Haoran You · Baopu Li · Shi Huihong · Yonggan Fu · Yingyan Lin)
- 2022 Poster: Removing Batch Normalization Boosts Adversarial Training (Haotao Wang · Aston Zhang · Shuai Zheng · Xingjian Shi · Mu Li · Zhangyang “Atlas” Wang)
- 2022 Poster: DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks (Yonggan Fu · Haichuan Yang · Jiayi Yuan · Meng Li · Cheng Wan · Raghuraman Krishnamoorthi · Vikas Chandra · Yingyan Lin)
- 2022 Poster: Neural Implicit Dictionary Learning via Mixture-of-Expert Training (Peihao Wang · Zhiwen Fan · Tianlong Chen · Zhangyang “Atlas” Wang)
- 2022 Spotlight: Removing Batch Normalization Boosts Adversarial Training (Haotao Wang · Aston Zhang · Shuai Zheng · Xingjian Shi · Mu Li · Zhangyang “Atlas” Wang)
- 2022 Spotlight: Neural Implicit Dictionary Learning via Mixture-of-Expert Training (Peihao Wang · Zhiwen Fan · Tianlong Chen · Zhangyang “Atlas” Wang)
- 2022 Spotlight: DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks (Yonggan Fu · Haichuan Yang · Jiayi Yuan · Meng Li · Cheng Wan · Raghuraman Krishnamoorthi · Vikas Chandra · Yingyan Lin)
- 2022 Spotlight: ShiftAddNAS: Hardware-Inspired Search for More Accurate and Efficient Neural Networks (Haoran You · Baopu Li · Shi Huihong · Yonggan Fu · Yingyan Lin)
- 2022 Spotlight: Coarsening the Granularity: Towards Structurally Sparse Lottery Tickets (Tianlong Chen · Xuxi Chen · Xiaolong Ma · Yanzhi Wang · Zhangyang “Atlas” Wang)
- 2021 Poster: Double-Win Quant: Aggressively Winning Robustness of Quantized Deep Neural Networks via Random Precision Training and Inference (Yonggan Fu · Qixuan Yu · Meng Li · Vikas Chandra · Yingyan Lin)
- 2021 Poster: Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm (Mingkang Zhu · Tianlong Chen · Zhangyang “Atlas” Wang)
- 2021 Spotlight: Double-Win Quant: Aggressively Winning Robustness of Quantized Deep Neural Networks via Random Precision Training and Inference (Yonggan Fu · Qixuan Yu · Meng Li · Vikas Chandra · Yingyan Lin)
- 2021 Oral: Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm (Mingkang Zhu · Tianlong Chen · Zhangyang “Atlas” Wang)
- 2021 Poster: Graph Contrastive Learning Automated (Yuning You · Tianlong Chen · Yang Shen · Zhangyang “Atlas” Wang)
- 2021 Poster: Self-Damaging Contrastive Learning (Ziyu Jiang · Tianlong Chen · Bobak Mortazavi · Zhangyang “Atlas” Wang)
- 2021 Oral: Graph Contrastive Learning Automated (Yuning You · Tianlong Chen · Yang Shen · Zhangyang “Atlas” Wang)
- 2021 Spotlight: Self-Damaging Contrastive Learning (Ziyu Jiang · Tianlong Chen · Bobak Mortazavi · Zhangyang “Atlas” Wang)
- 2021 Poster: Auto-NBA: Efficient and Effective Search Over the Joint Space of Networks, Bitwidths, and Accelerators (Yonggan Fu · Yongan Zhang · Yang Zhang · David Cox · Yingyan Lin)
- 2021 Spotlight: Auto-NBA: Efficient and Effective Search Over the Joint Space of Networks, Bitwidths, and Accelerators (Yonggan Fu · Yongan Zhang · Yang Zhang · David Cox · Yingyan Lin)
- 2021 Poster: A Unified Lottery Ticket Hypothesis for Graph Neural Networks (Tianlong Chen · Yongduo Sui · Xuxi Chen · Aston Zhang · Zhangyang “Atlas” Wang)
- 2021 Poster: Efficient Lottery Ticket Finding: Less Data is More (Zhenyu Zhang · Xuxi Chen · Tianlong Chen · Zhangyang “Atlas” Wang)
- 2021 Spotlight: Efficient Lottery Ticket Finding: Less Data is More (Zhenyu Zhang · Xuxi Chen · Tianlong Chen · Zhangyang “Atlas” Wang)
- 2021 Spotlight: A Unified Lottery Ticket Hypothesis for Graph Neural Networks (Tianlong Chen · Yongduo Sui · Xuxi Chen · Aston Zhang · Zhangyang “Atlas” Wang)
- 2020 Poster: Self-PU: Self Boosted and Calibrated Positive-Unlabeled Training (Xuxi Chen · Wuyang Chen · Tianlong Chen · Ye Yuan · Chen Gong · Kewei Chen · Zhangyang “Atlas” Wang)
- 2020 Poster: When Does Self-Supervision Help Graph Convolutional Networks? (Yuning You · Tianlong Chen · Zhangyang “Atlas” Wang · Yang Shen)
- 2020 Poster: Automated Synthetic-to-Real Generalization (Wuyang Chen · Zhiding Yu · Zhangyang “Atlas” Wang · Anima Anandkumar)
- 2020 Poster: Eliminating the Invariance on the Loss Landscape of Linear Autoencoders (Reza Oftadeh · Jiayi Shen · Zhangyang “Atlas” Wang · Dylan Shell)
- 2020 Poster: NADS: Neural Architecture Distribution Search for Uncertainty Awareness (Randy Ardywibowo · Shahin Boluki · Xinyu Gong · Zhangyang “Atlas” Wang · Xiaoning Qian)
- 2019 Workshop: Joint Workshop on On-Device Machine Learning & Compact Deep Neural Network Representations (ODML-CDNNR) (Sujith Ravi · Zornitsa Kozareva · Lixin Fan · Max Welling · Yurong Chen · Werner Bailer · Brian Kulis · Haoji Hu · Jonathan Dekhtiar · Yingyan Lin · Diana Marculescu)
- 2018 Poster: Deep k-Means: Re-Training and Parameter Sharing with Harder Cluster Assignments for Compressing Deep Convolutions (Junru Wu · Yue Wang · Zhenyu Wu · Zhangyang Wang · Ashok Veeraraghavan · Yingyan Lin)
- 2018 Oral: Deep k-Means: Re-Training and Parameter Sharing with Harder Cluster Assignments for Compressing Deep Convolutions (Junru Wu · Yue Wang · Zhenyu Wu · Zhangyang Wang · Ashok Veeraraghavan · Yingyan Lin)