Adversarial Training (AT) is known as an effective approach to enhancing the robustness of deep neural networks. Recently, researchers have noticed that robust models trained with AT have good generative ability and can synthesize realistic images, yet the reason behind this phenomenon remains under-explored. In this paper, we demystify it by developing a unified probabilistic framework, called Contrastive Energy-based Models (CEM). On the one hand, we provide the first probabilistic characterization of AT through a unified understanding of robustness and generative ability. On the other hand, our CEM naturally generalizes AT to the unsupervised scenario and yields principled unsupervised AT methods. Based on this framework, we propose principled adversarial sampling algorithms for both supervised and unsupervised scenarios. Experiments show that our sampling algorithms significantly improve sampling quality, achieving an Inception Score of 9.61 on CIFAR-10, which is superior to previous energy-based models and comparable to state-of-the-art generative models.
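The energy-based sampling the abstract alludes to can be illustrated with a minimal sketch: a classifier's logits f(x) induce an unconditional energy E(x) = -logsumexp(f(x)), and samples are drawn by Langevin dynamics, i.e. noisy gradient descent on that energy. This is a generic illustration of the technique, not the paper's algorithm; the tiny linear "classifier" below is a toy placeholder for a trained robust network, and the step size and step count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "classifier" standing in for a robust network (hypothetical weights):
# 3 classes, 2-dimensional inputs.
W = rng.standard_normal((3, 2))
b = np.zeros(3)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def energy(x):
    # Unconditional energy induced by the classifier: E(x) = -logsumexp(W @ x + b).
    logits = W @ x + b
    m = logits.max()
    return -(np.log(np.exp(logits - m).sum()) + m)

def grad_energy(x):
    # Analytic gradient: d/dx [-logsumexp(W @ x + b)] = -W^T softmax(W @ x + b).
    return -W.T @ softmax(W @ x + b)

def langevin_sample(x0, step=0.1, n_steps=200):
    # Langevin dynamics: gradient descent on the energy plus Gaussian noise.
    x = x0.copy()
    for _ in range(n_steps):
        x = x - step * grad_energy(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)
    return x

sample = langevin_sample(rng.standard_normal(2))
```

In practice the same update is run on image tensors, with the energy gradient obtained by backpropagation through the trained network rather than a closed-form expression.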
Author Information
Yisen Wang (Peking University)
Jiansheng Yang
Zhouchen Lin (Peking University)
Yifei Wang (Peking University)
More from the Same Authors
- 2021: Adversarial Interaction Attacks: Fooling AI to Misinterpret Human Intentions » Nodens Koren · Xingjun Ma · Qiuhong Ke · Yisen Wang · James Bailey
- 2023 Poster: On the Generalization of Multi-modal Contrastive Learning » Qi Zhang · Yifei Wang · Yisen Wang
- 2023 Poster: Rethinking Weak Supervision in Helping Contrastive Learning » Jingyi Cui · Weiran Huang · Yifei Wang · Yisen Wang
- 2022 Poster: PDO-s3DCNNs: Partial Differential Operator Based Steerable 3D CNNs » Zhengyang Shen · Tao Hong · Qi She · Jinwen Ma · Zhouchen Lin
- 2022 Spotlight: PDO-s3DCNNs: Partial Differential Operator Based Steerable 3D CNNs » Zhengyang Shen · Tao Hong · Qi She · Jinwen Ma · Zhouchen Lin
- 2022 Poster: Certified Adversarial Robustness Under the Bounded Support Set » Yiwen Kou · Qinyuan Zheng · Yisen Wang
- 2022 Poster: Kill a Bird with Two Stones: Closing the Convergence Gaps in Non-Strongly Convex Optimization by Directly Accelerated SVRG with Double Compensation and Snapshots » Yuanyuan Liu · Fanhua Shang · Weixin An · Hongying Liu · Zhouchen Lin
- 2022 Spotlight: Certified Adversarial Robustness Under the Bounded Support Set » Yiwen Kou · Qinyuan Zheng · Yisen Wang
- 2022 Spotlight: Kill a Bird with Two Stones: Closing the Convergence Gaps in Non-Strongly Convex Optimization by Directly Accelerated SVRG with Double Compensation and Snapshots » Yuanyuan Liu · Fanhua Shang · Weixin An · Hongying Liu · Zhouchen Lin
- 2022 Poster: Restarted Nonconvex Accelerated Gradient Descent: No More Polylogarithmic Factor in the $O(\epsilon^{-7/4})$ Complexity » Huan Li · Zhouchen Lin
- 2022 Poster: CerDEQ: Certifiable Deep Equilibrium Model » Mingjie Li · Yisen Wang · Zhouchen Lin
- 2022 Poster: G$^2$CN: Graph Gaussian Convolution Networks with Concentrated Graph Filters » Mingjie Li · Xiaojun Guo · Yifei Wang · Yisen Wang · Zhouchen Lin
- 2022 Poster: Optimization-Induced Graph Implicit Nonlinear Diffusion » Qi Chen · Yifei Wang · Yisen Wang · Jiansheng Yang · Zhouchen Lin
- 2022 Spotlight: Restarted Nonconvex Accelerated Gradient Descent: No More Polylogarithmic Factor in the $O(\epsilon^{-7/4})$ Complexity » Huan Li · Zhouchen Lin
- 2022 Spotlight: CerDEQ: Certifiable Deep Equilibrium Model » Mingjie Li · Yisen Wang · Zhouchen Lin
- 2022 Spotlight: Optimization-Induced Graph Implicit Nonlinear Diffusion » Qi Chen · Yifei Wang · Yisen Wang · Jiansheng Yang · Zhouchen Lin
- 2022 Spotlight: G$^2$CN: Graph Gaussian Convolution Networks with Concentrated Graph Filters » Mingjie Li · Xiaojun Guo · Yifei Wang · Yisen Wang · Zhouchen Lin
- 2021: Discussion Panel #1 » Hang Su · Matthias Hein · Liwei Wang · Sven Gowal · Jan Hendrik Metzen · Henry Liu · Yisen Wang
- 2021 Poster: GBHT: Gradient Boosting Histogram Transform for Density Estimation » Jingyi Cui · Hanyuan Hang · Yisen Wang · Zhouchen Lin
- 2021 Poster: Leveraged Weighted Loss for Partial Label Learning » Hongwei Wen · Jingyi Cui · Hanyuan Hang · Jiabin Liu · Yisen Wang · Zhouchen Lin
- 2021 Spotlight: GBHT: Gradient Boosting Histogram Transform for Density Estimation » Jingyi Cui · Hanyuan Hang · Yisen Wang · Zhouchen Lin
- 2021 Oral: Leveraged Weighted Loss for Partial Label Learning » Hongwei Wen · Jingyi Cui · Hanyuan Hang · Jiabin Liu · Yisen Wang · Zhouchen Lin
- 2021 Poster: Can Subnetwork Structure Be the Key to Out-of-Distribution Generalization? » Dinghuai Zhang · Kartik Ahuja · Yilun Xu · Yisen Wang · Aaron Courville
- 2021 Oral: Can Subnetwork Structure Be the Key to Out-of-Distribution Generalization? » Dinghuai Zhang · Kartik Ahuja · Yilun Xu · Yisen Wang · Aaron Courville
- 2021 Poster: Uncertainty Principles of Encoding GANs » Ruili Feng · Zhouchen Lin · Jiapeng Zhu · Deli Zhao · Jingren Zhou · Zheng-Jun Zha
- 2021 Spotlight: Uncertainty Principles of Encoding GANs » Ruili Feng · Zhouchen Lin · Jiapeng Zhu · Deli Zhao · Jingren Zhou · Zheng-Jun Zha
- 2020 Poster: PDO-eConvs: Partial Differential Operator Based Equivariant Convolutions » Zhengyang Shen · Lingshen He · Zhouchen Lin · Jinwen Ma
- 2020 Poster: Boosted Histogram Transform for Regression » Yuchao Cai · Hanyuan Hang · Hanfang Yang · Zhouchen Lin
- 2020 Poster: Implicit Euler Skip Connections: Enhancing Adversarial Robustness via Numerical Stability » Mingjie Li · Lingshen He · Zhouchen Lin
- 2020 Poster: Maximum-and-Concatenation Networks » Xingyu Xie · Hao Kong · Jianlong Wu · Wayne Zhang · Guangcan Liu · Zhouchen Lin
- 2019 Poster: Differentiable Linearized ADMM » Xingyu Xie · Jianlong Wu · Guangcan Liu · Zhisheng Zhong · Zhouchen Lin
- 2019 Oral: Differentiable Linearized ADMM » Xingyu Xie · Jianlong Wu · Guangcan Liu · Zhisheng Zhong · Zhouchen Lin