Generative adversarial networks (GANs) are effective at generating realistic images, but their training is often unstable. Existing efforts model the training dynamics of GANs in the parameter space, but these analyses do not directly motivate practically effective stabilizing methods. To this end, we present a conceptually novel perspective from control theory that directly models the dynamics of GANs in the frequency domain and provides simple yet effective methods to stabilize GAN training. We first analyze the training dynamics of a prototypical Dirac GAN and adopt the widely used closed-loop control (CLC) to improve its stability. We then extend CLC to stabilize the training dynamics of general GANs, which can be implemented as an L2 regularizer on the output of the discriminator. Empirical results show that our method effectively stabilizes training and obtains state-of-the-art performance on data generation tasks.
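The abstract's point can be illustrated on the prototypical Dirac GAN it mentions. The sketch below is a minimal simulation, not the paper's implementation: it assumes a WGAN-style loss f(t) = t, simultaneous Euler updates, and a hypothetical penalty weight `lam` on an L2 term over the discriminator's output at the generated sample. Without the penalty the iterates spiral away from the equilibrium; with it, they are damped toward it.

```python
import math

def dirac_gan(steps=2000, h=0.05, lam=0.0, theta=1.0, psi=1.0):
    """Simulate a Dirac GAN: generator p_G = delta(theta), data delta(0),
    linear discriminator D(x) = psi * x, WGAN-style loss f(t) = t.
    `lam` weights an L2 penalty lam/2 * D(theta)^2 on the discriminator."""
    for _ in range(steps):
        g_theta = -psi                        # generator: descend f(psi * theta)
        g_psi = theta - lam * theta**2 * psi  # discriminator: ascend, minus penalty gradient
        theta, psi = theta + h * g_theta, psi + h * g_psi
    return math.hypot(theta, psi)  # distance from the equilibrium (0, 0)
```

With `lam = 0` the update is a discretized rotation whose radius grows every step; with `lam > 0` the penalty term removes energy (for V = (theta^2 + psi^2)/2, the continuous dynamics give dV/dt = -lam * theta^2 * psi^2 <= 0), so the iterates approach the equilibrium.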
Author Information
Kun Xu (Tsinghua University)
Chongxuan Li (Tsinghua University)
Jun Zhu (Tsinghua University)
Bo Zhang (Tsinghua University)
More from the Same Authors
- 2021: Towards Safe Reinforcement Learning via Constraining Conditional Value at Risk (Chengyang Ying · Xinning Zhou · Dong Yan · Jun Zhu)
- 2021: Strategically-timed State-Observation Attacks on Deep Reinforcement Learning Agents (Xinning Zhou · You Qiaoben · Chengyang Ying · Jun Zhu)
- 2021: Adversarial Semantic Contour for Object Detection (Yichi Zhang · Zijian Zhu · Xiao Yang · Jun Zhu)
- 2021: Query-based Adversarial Attacks on Graph with Fake Nodes (Zhengyi Wang · Zhongkai Hao · Jun Zhu)
- 2022 Poster: NeuralEF: Deconstructing Kernels by Deep Neural Networks (Zhijie Deng · Jiaxin Shi · Jun Zhu)
- 2022 Spotlight: NeuralEF: Deconstructing Kernels by Deep Neural Networks (Zhijie Deng · Jiaxin Shi · Jun Zhu)
- 2022 Poster: Robustness and Accuracy Could Be Reconcilable by (Proper) Definition (Tianyu Pang · Min Lin · Xiao Yang · Jun Zhu · Shuicheng Yan)
- 2022 Poster: Maximum Likelihood Training for Score-based Diffusion ODEs by High Order Denoising Score Matching (Cheng Lu · Kaiwen Zheng · Fan Bao · Jianfei Chen · Chongxuan Li · Jun Zhu)
- 2022 Poster: Fast Lossless Neural Compression with Integer-Only Discrete Flows (Siyu Wang · Jianfei Chen · Chongxuan Li · Jun Zhu · Bo Zhang)
- 2022 Spotlight: Fast Lossless Neural Compression with Integer-Only Discrete Flows (Siyu Wang · Jianfei Chen · Chongxuan Li · Jun Zhu · Bo Zhang)
- 2022 Spotlight: Maximum Likelihood Training for Score-based Diffusion ODEs by High Order Denoising Score Matching (Cheng Lu · Kaiwen Zheng · Fan Bao · Jianfei Chen · Chongxuan Li · Jun Zhu)
- 2022 Spotlight: Robustness and Accuracy Could Be Reconcilable by (Proper) Definition (Tianyu Pang · Min Lin · Xiao Yang · Jun Zhu · Shuicheng Yan)
- 2022 Poster: Estimating the Optimal Covariance with Imperfect Mean in Diffusion Probabilistic Models (Fan Bao · Chongxuan Li · Jiacheng Sun · Jun Zhu · Bo Zhang)
- 2022 Poster: Thompson Sampling for (Combinatorial) Pure Exploration (Siwei Wang · Jun Zhu)
- 2022 Spotlight: Estimating the Optimal Covariance with Imperfect Mean in Diffusion Probabilistic Models (Fan Bao · Chongxuan Li · Jiacheng Sun · Jun Zhu · Bo Zhang)
- 2022 Spotlight: Thompson Sampling for (Combinatorial) Pure Exploration (Siwei Wang · Jun Zhu)
- 2021 Poster: Variational (Gradient) Estimate of the Score Function in Energy-based Latent Variable Models (Fan Bao · Kun Xu · Chongxuan Li · Lanqing Hong · Jun Zhu · Bo Zhang)
- 2021 Spotlight: Variational (Gradient) Estimate of the Score Function in Energy-based Latent Variable Models (Fan Bao · Kun Xu · Chongxuan Li · Lanqing Hong · Jun Zhu · Bo Zhang)
- 2020 Poster: Variance Reduction and Quasi-Newton for Particle-Based Variational Inference (Michael Zhu · Chang Liu · Jun Zhu)
- 2020 Poster: VFlow: More Expressive Generative Flows with Variational Data Augmentation (Jianfei Chen · Cheng Lu · Biqi Chenli · Jun Zhu · Tian Tian)
- 2020 Poster: Nonparametric Score Estimators (Yuhao Zhou · Jiaxin Shi · Jun Zhu)
- 2019 Poster: Improving Adversarial Robustness via Promoting Ensemble Diversity (Tianyu Pang · Kun Xu · Chao Du · Ning Chen · Jun Zhu)
- 2019 Oral: Improving Adversarial Robustness via Promoting Ensemble Diversity (Tianyu Pang · Kun Xu · Chao Du · Ning Chen · Jun Zhu)
- 2018 Poster: Message Passing Stein Variational Gradient Descent (Jingwei Zhuo · Chang Liu · Jiaxin Shi · Jun Zhu · Ning Chen · Bo Zhang)
- 2018 Poster: Racing Thompson: an Efficient Algorithm for Thompson Sampling with Non-conjugate Priors (Yichi Zhou · Jun Zhu · Jingwei Zhuo)
- 2018 Oral: Message Passing Stein Variational Gradient Descent (Jingwei Zhuo · Chang Liu · Jiaxin Shi · Jun Zhu · Ning Chen · Bo Zhang)
- 2018 Oral: Racing Thompson: an Efficient Algorithm for Thompson Sampling with Non-conjugate Priors (Yichi Zhou · Jun Zhu · Jingwei Zhuo)
- 2018 Poster: Max-Mahalanobis Linear Discriminant Analysis Networks (Tianyu Pang · Chao Du · Jun Zhu)
- 2018 Poster: Adversarial Attack on Graph Structured Data (Hanjun Dai · Hui Li · Tian Tian · Xin Huang · Lin Wang · Jun Zhu · Le Song)
- 2018 Oral: Max-Mahalanobis Linear Discriminant Analysis Networks (Tianyu Pang · Chao Du · Jun Zhu)
- 2018 Oral: Adversarial Attack on Graph Structured Data (Hanjun Dai · Hui Li · Tian Tian · Xin Huang · Lin Wang · Jun Zhu · Le Song)
- 2018 Poster: Stochastic Training of Graph Convolutional Networks with Variance Reduction (Jianfei Chen · Jun Zhu · Le Song)
- 2018 Poster: A Spectral Approach to Gradient Estimation for Implicit Distributions (Jiaxin Shi · Shengyang Sun · Jun Zhu)
- 2018 Oral: A Spectral Approach to Gradient Estimation for Implicit Distributions (Jiaxin Shi · Shengyang Sun · Jun Zhu)
- 2018 Oral: Stochastic Training of Graph Convolutional Networks with Variance Reduction (Jianfei Chen · Jun Zhu · Le Song)
- 2017 Poster: Identify the Nash Equilibrium in Static Games with Random Payoffs (Yichi Zhou · Jialian Li · Jun Zhu)
- 2017 Talk: Identify the Nash Equilibrium in Static Games with Random Payoffs (Yichi Zhou · Jialian Li · Jun Zhu)