Oral
Plug-and-Play Methods Provably Converge with Properly Trained Denoisers
Ernest Ryu · Jialin Liu · Sicheng Wang · Xiaohan Chen · Zhangyang Wang · Wotao Yin

Wed Jun 12th 03:00 -- 03:05 PM @ Room 104

Plug-and-play (PnP) is a non-convex framework that integrates modern denoising priors, such as BM3D or deep learning-based denoisers, into ADMM or other proximal algorithms. An advantage of PnP is that one can use pre-trained denoisers when there is insufficient data for end-to-end training. Although PnP has recently been studied extensively and has exhibited great empirical results, theoretical analysis of even the most basic question, convergence, has been insufficient. In this paper, we theoretically establish convergence of PnP-FBS and PnP-ADMM, without using diminishing stepsizes, under a certain Lipschitz condition on the denoisers. We then propose a technique, which we call real spectral normalization, to train deep learning-based denoisers that satisfy the proposed Lipschitz condition. Finally, we present experimental results that validate the theory.
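The PnP-FBS iteration the abstract refers to alternates a gradient step on the data-fidelity term with an application of the denoiser. A minimal sketch is below; the least-squares data term and the moving-average "denoiser" are illustrative assumptions standing in for the paper's setup (e.g., BM3D or a realSN-trained network), not the authors' experiments.

```python
import numpy as np

def pnp_fbs(grad_f, denoiser, x0, alpha, iters=200):
    """Plug-and-play forward-backward splitting:
    x_{k+1} = denoiser(x_k - alpha * grad_f(x_k))."""
    x = x0
    for _ in range(iters):
        x = denoiser(x - alpha * grad_f(x))
    return x

# Toy example (assumed setup): data term f(x) = 0.5 * ||A x - b||^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 20)) / np.sqrt(30)
b = rng.standard_normal(30)
grad_f = lambda x: A.T @ (A @ x - b)

def denoiser(x):
    # Placeholder denoiser: a nonexpansive moving-average smoother.
    # In the paper this would be a properly (Lipschitz-)trained network.
    k = np.ones(3) / 3
    return np.convolve(x, k, mode="same")

x_hat = pnp_fbs(grad_f, denoiser, np.zeros(20), alpha=0.5)
```

The paper's point is that when the denoiser satisfies the proposed Lipschitz condition, this fixed-point iteration converges without diminishing stepsizes; the smoother above is nonexpansive, so the toy iteration settles to a fixed point.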

Author Information

Ernest Ryu (University of California, Los Angeles)
Jialin Liu (University of California, Los Angeles (UCLA))
Sicheng Wang (Texas A&M University)
Xiaohan Chen (Texas A&M University)
Zhangyang Wang (Texas A&M University)
Wotao Yin (Alibaba US)
