Plug-and-play (PnP) is a non-convex framework that integrates modern denoising priors, such as BM3D or deep learning-based denoisers, into ADMM or other proximal algorithms. An advantage of PnP is that one can use pre-trained denoisers when there is insufficient data for end-to-end training. Although PnP has recently been studied extensively and with great empirical success, theoretical analysis addressing even the most basic question of convergence has been lacking. In this paper, we theoretically establish convergence of PnP-FBS and PnP-ADMM, without using diminishing stepsizes, under a certain Lipschitz condition on the denoisers. We then propose real spectral normalization, a technique for training deep learning-based denoisers to satisfy the proposed Lipschitz condition. Finally, we present experimental results validating the theory.
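To make the setting concrete, PnP-FBS alternates a gradient step on the data-fidelity term with an application of the denoiser, x_{k+1} = H_sigma(x_k - alpha * grad f(x_k)). Below is a minimal NumPy sketch of this iteration on a toy least-squares problem; the random measurement matrix A, the moving-average "denoiser", and the step-size choice are illustrative stand-ins for exposition only, not the trained deep denoisers or experiments of the paper.

import numpy as np

# Toy inverse problem: recover a smooth signal x from noisy linear
# measurements b = A x + noise, with data-fidelity f(x) = 0.5 * ||A x - b||^2.
rng = np.random.default_rng(0)
n, m = 128, 64
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.sin(2 * np.pi * 3 * np.arange(n) / n)      # smooth ground truth
b = A @ x_true + 0.01 * rng.standard_normal(m)

def grad_f(x):
    """Gradient of the quadratic data-fidelity term f."""
    return A.T @ (A @ x - b)

def denoiser(x, width=5):
    """Placeholder denoiser (moving-average smoother) standing in for a
    trained deep denoiser H_sigma; any denoiser satisfying the paper's
    Lipschitz condition could be plugged in here."""
    kernel = np.ones(width) / width
    return np.convolve(x, kernel, mode="same")

def pnp_fbs(x0, alpha, iters=200):
    """PnP-FBS iteration: x_{k+1} = H_sigma(x_k - alpha * grad_f(x_k))."""
    x = x0.copy()
    for _ in range(iters):
        x = denoiser(x - alpha * grad_f(x))
    return x

alpha = 1.0 / np.linalg.norm(A, 2) ** 2   # illustrative step size, 1/L for f
x_hat = pnp_fbs(np.zeros(n), alpha)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))

PnP-ADMM replaces the explicit gradient step above with a proximal step on f and a scaled dual update, with the denoiser again playing the role of the prior's proximal operator.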
Author Information
Ernest Ryu (University of California, Los Angeles)
Jialin Liu (University of California, Los Angeles)
Sicheng Wang (Texas A&M University)
Xiaohan Chen (Texas A&M University)
Zhangyang Wang (Texas A&M University)
Wotao Yin (Alibaba US)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Oral: Plug-and-Play Methods Provably Converge with Properly Trained Denoisers
  Wed Jun 12th 10:00 -- 10:05 PM, Room 104
More from the Same Authors
- 2019 Poster: Acceleration of SVRG and Katyusha X by Inexact Preconditioning
  Yanli Liu · Fei Feng · Wotao Yin
- 2019 Oral: Acceleration of SVRG and Katyusha X by Inexact Preconditioning
  Yanli Liu · Fei Feng · Wotao Yin
- 2018 Poster: Deep k-Means: Re-Training and Parameter Sharing with Harder Cluster Assignments for Compressing Deep Convolutions
  Junru Wu · Yue Wang · Zhenyu Wu · Zhangyang Wang · Ashok Veeraraghavan · Yingyan Lin
- 2018 Oral: Deep k-Means: Re-Training and Parameter Sharing with Harder Cluster Assignments for Compressing Deep Convolutions
  Junru Wu · Yue Wang · Zhenyu Wu · Zhangyang Wang · Ashok Veeraraghavan · Yingyan Lin