Poster
Tue Aug 08 01:30 AM -- 05:00 AM (PDT) @ Gallery #131
Convergence Analysis of Proximal Gradient with Momentum for Nonconvex Optimization
Qunwei Li · Yi Zhou · Yingbin Liang · Pramod K Varshney

In this work, we investigate the accelerated proximal gradient method for nonconvex programming (APGnc). At each iteration, the method compares a standard proximal gradient step with a linear extrapolation (momentum) step and accepts whichever yields the lower function value, thereby guaranteeing a monotone decrease of the objective. Specifically, under a general nonsmooth and nonconvex setting, we provide a rigorous argument showing that the limit points of the sequence generated by APGnc are critical points of the objective function. Then, by exploiting the Kurdyka-Łojasiewicz (KL) property, which holds for a broad class of functions, we establish the linear and sub-linear convergence rates of the function value sequence generated by APGnc. We further propose a stochastic variance-reduced variant, SVRG-APGnc, and establish its linear convergence under a special case of the KL property. We also extend the analysis to inexact versions of these methods and develop an adaptive momentum strategy that improves the numerical performance.
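
The abstract describes the acceptance rule but not the precise update equations. The following is a minimal sketch of that rule, assuming a fixed momentum coefficient beta, a step size alpha = 1/L for an estimated Lipschitz constant L, and an l1 regularizer whose prox is soft thresholding; the function names (apgnc, soft_threshold) and the toy objective are illustrative, not taken from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def apgnc(grad_f, F, prox_g, x0, alpha, beta=0.5, iters=500):
    """Proximal gradient with momentum and a monotone acceptance rule:
    compare the plain proximal gradient step with the extrapolated
    (momentum) step and keep whichever has the lower objective value."""
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(iters):
        y = x + beta * (x - x_prev)                 # linear extrapolation
        z = prox_g(y - alpha * grad_f(y), alpha)    # momentum prox step
        v = prox_g(x - alpha * grad_f(x), alpha)    # plain prox step
        x_prev = x
        x = z if F(z) <= F(v) else v                # monotone acceptance
    return x

# Toy nonconvex composite problem (hypothetical, for illustration only):
#   F(x) = 0.5 ||Ax - b||^2 + mu * sum(x_i^2 / (1 + x_i^2))  (smooth, nonconvex)
#          + lam * ||x||_1                                    (nonsmooth)
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
b = rng.standard_normal(40)
mu, lam = 1.0, 0.1

def grad_f(x):
    return A.T @ (A @ x - b) + mu * 2 * x / (1 + x**2) ** 2

def F(x):
    return (0.5 * np.sum((A @ x - b) ** 2)
            + mu * np.sum(x**2 / (1 + x**2))
            + lam * np.sum(np.abs(x)))

prox_g = lambda v, a: soft_threshold(v, a * lam)
L = np.linalg.norm(A, 2) ** 2 + 2 * mu    # crude Lipschitz bound for grad_f
x_hat = apgnc(grad_f, F, prox_g, np.zeros(100), alpha=1.0 / L)
print(F(x_hat))
```

With alpha <= 1/L, the plain proximal step already guarantees sufficient decrease, so taking the minimum of the two candidates preserves monotonicity while still allowing the momentum step to accelerate progress when it helps.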
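For context, the KL property referenced above is commonly stated as follows. This is the standard Kurdyka-Łojasiewicz inequality as it appears in the nonconvex optimization literature, not a quotation from the paper, and the paper's precise assumptions may differ.

```latex
% A proper lower semicontinuous F satisfies the KL property at \bar{x}
% if, near \bar{x}, there is a desingularizing function
% \varphi(t) = c\,t^{1-\theta} with c > 0 and \theta \in [0, 1) such that
\[
  \varphi'\bigl(F(x) - F(\bar{x})\bigr)\,
  \operatorname{dist}\bigl(0, \partial F(x)\bigr) \;\ge\; 1 .
\]
% The exponent \theta typically governs the rate: \theta \in (0, 1/2]
% yields linear convergence and \theta \in (1/2, 1) sub-linear convergence.
```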