Plug-and-Play Methods Provably Converge with Properly Trained Denoisers
Ernest Ryu · Jialin Liu · Sicheng Wang · Xiaohan Chen · Zhangyang Wang · Wotao Yin

Wed Jun 12 06:30 PM -- 09:00 PM (PDT) @ Pacific Ballroom #107

Plug-and-play (PnP) is a non-convex framework that integrates modern denoising priors, such as BM3D or deep learning-based denoisers, into ADMM or other proximal algorithms. An advantage of PnP is that one can use pre-trained denoisers when there is insufficient data for end-to-end training. Although PnP has recently been studied extensively with great empirical success, theoretical analysis addressing even the most basic question of convergence has been insufficient. In this paper, we theoretically establish convergence of PnP-FBS and PnP-ADMM, without using diminishing stepsizes, under a certain Lipschitz condition on the denoisers. We then propose real spectral normalization, a technique for training deep learning-based denoisers to satisfy the proposed Lipschitz condition. Finally, we present experimental results validating the theory.
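To illustrate the idea, here is a minimal sketch of a PnP forward-backward splitting (PnP-FBS) iteration in NumPy. For a least-squares data-fidelity term f(x) = ½‖Ax − b‖², each step takes a gradient step on f and then applies a plug-in denoiser in place of a proximal operator. The `box_denoiser` below is a hypothetical stand-in (a simple contractive averaging filter), not the trained, spectrally normalized deep denoisers studied in the paper.

```python
import numpy as np

def pnp_fbs(A, b, denoiser, alpha=0.5, iters=100):
    """PnP-FBS sketch for min_x 0.5*||Ax - b||^2 + (implicit prior),
    with the prior's proximal step replaced by a plug-in denoiser."""
    x = A.T @ b  # crude initialization from the measurements
    for _ in range(iters):
        grad = A.T @ (A @ x - b)        # gradient of the data-fidelity term
        x = denoiser(x - alpha * grad)  # forward (gradient) then backward (denoise) step
    return x

def box_denoiser(x):
    # Hypothetical toy denoiser: a [1/4, 1/2, 1/4] averaging filter on the
    # interior samples; boundary samples are left unchanged.
    y = x.copy()
    y[1:-1] = (x[:-2] + 2 * x[1:-1] + x[2:]) / 4.0
    return y

rng = np.random.default_rng(0)
A = np.eye(64)                                   # identity operator -> pure denoising
x_true = np.sin(np.linspace(0, 4 * np.pi, 64))   # smooth ground-truth signal
b = x_true + 0.1 * rng.standard_normal(64)       # noisy observation
x_hat = pnp_fbs(A, b, box_denoiser, alpha=0.9, iters=20)
```

Because the averaging filter is 1-Lipschitz, this toy iteration is a contraction and converges to a fixed point, loosely mirroring the role the paper's Lipschitz condition plays for trained denoisers.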

Author Information

Ernest Ryu (University of California, Los Angeles)
Jialin Liu (University of California, Los Angeles)
Sicheng Wang (Texas A&M University)
Xiaohan Chen (Texas A&M University)
Zhangyang Wang (Texas A&M University)
Wotao Yin (Alibaba US)
