
Global Convergence of Block Coordinate Descent in Deep Learning
Jinshan ZENG · Tsz Kit Lau · Shaobo Lin · Yuan Yao

Thu Jun 13 06:30 PM -- 09:00 PM (PDT) @ Pacific Ballroom #78
Deep learning has attracted extensive attention due to its great empirical success. The efficiency of block coordinate descent (BCD) methods in deep neural network (DNN) training has recently been demonstrated. However, theoretical studies on their convergence properties are limited due to the highly nonconvex nature of DNN training. In this paper, we aim to provide a general methodology for proving convergence guarantees for this class of methods. In particular, for most of the commonly used DNN training models involving both two- and three-splitting schemes, we establish global convergence to a critical point at a rate of ${\cal O}(1/k)$, where $k$ is the number of iterations. The results extend to general loss functions with Lipschitz continuous gradients and to deep residual networks (ResNets). Our key development adds several new elements to the Kurdyka-Łojasiewicz inequality framework, enabling us to carry out the global convergence analysis of BCD in the general scenario of deep learning.
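To make the splitting idea concrete, below is a minimal sketch (not the authors' code) of a two-splitting-style BCD pass on a one-hidden-layer ReLU network with squared loss: auxiliary variables V1, V2 stand in for the layer outputs, and each block (W2, V2, V1, W1) is updated in turn. The penalty weight gamma, step size alpha, network sizes, and block ordering are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data.
n, d, h, m = 64, 5, 8, 1          # samples, input dim, hidden dim, output dim
X = rng.standard_normal((d, n))   # inputs, one column per sample
Y = rng.standard_normal((m, n))   # targets

# Blocks: weights W1, W2 and auxiliary activations V1 ~ relu(W1 X), V2 ~ W2 V1.
W1 = rng.standard_normal((h, d)) * 0.1
W2 = rng.standard_normal((m, h)) * 0.1
V1 = np.maximum(W1 @ X, 0.0)
V2 = W2 @ V1

gamma = 1.0    # penalty weight coupling the V-blocks to their layer maps (assumed)
alpha = 0.1    # step size for the gradient-type block updates (assumed)

def relu(z):
    return np.maximum(z, 0.0)

for k in range(200):
    # Block 1: output weights W2, exact minimizer of ||V2 - W V1||^2
    # (ridge-regularized normal equations for numerical stability).
    W2 = V2 @ V1.T @ np.linalg.inv(V1 @ V1.T + 1e-6 * np.eye(h))

    # Block 2: output auxiliary variable V2,
    # argmin_V ||V - Y||^2 + gamma ||V - W2 V1||^2 in closed form.
    V2 = (Y + gamma * (W2 @ V1)) / (1.0 + gamma)

    # Block 3: hidden auxiliary variable V1, a gradient step on
    # gamma/2 ||V2 - W2 V1||^2 + gamma/2 ||V1 - relu(W1 X)||^2.
    grad_V1 = -gamma * W2.T @ (V2 - W2 @ V1) + gamma * (V1 - relu(W1 @ X))
    V1 = V1 - alpha * grad_V1

    # Block 4: hidden weights W1, a gradient step on
    # gamma/2 ||V1 - relu(W1 X)||^2 (ReLU handled by its a.e. derivative).
    pre = W1 @ X
    grad_W1 = -gamma * ((V1 - relu(pre)) * (pre > 0)) @ X.T
    W1 = W1 - alpha * grad_W1

    if k % 50 == 0:
        loss = 0.5 * np.linalg.norm(W2 @ relu(W1 @ X) - Y) ** 2
        print(f"iter {k:3d}  fit loss {loss:.4f}")
```

In this style of scheme, each block subproblem is either solved in closed form or relaxed by a prox-linear (here, plain gradient) step, which is the structure the paper's Kurdyka-Łojasiewicz-based analysis exploits.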

Author Information

Jinshan ZENG (Hong Kong University of Science and Technology)
Tsz Kit Lau (Northwestern University)
Shaobo Lin (Wenzhou University)
Yuan Yao (Hong Kong University of Science and Technology)
