Guided Learning of Nonconvex Models through Successive Functional Gradient Optimization

Rie Johnson, Tong Zhang


Abstract:

This paper presents a framework of successive functional gradient optimization for training nonconvex models such as neural networks, where training is driven by mirror descent in a function space. We provide a theoretical analysis and empirical study of the training method derived from this framework. It is shown that the method leads to better performance than that of standard training techniques.
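To make the idea concrete, below is a minimal sketch of successive functional gradient training, not the authors' released code: at each stage the current model is frozen as a "guide", and the next stage is trained on a mixture of the task loss and a KL proximity term to the guide's predictions, which plays the role of the Bregman term in mirror descent over the function space. Names such as `make_model`, `train_loader`, `alpha`, and the stage/epoch counts are illustrative placeholders, and the exact update rule in the paper may differ.

```python
# Hedged sketch: successive stages, each trained toward the task labels while
# staying close (in KL) to the previous stage's frozen guide model.
import copy
import torch
import torch.nn.functional as F

def successive_functional_gradient_training(make_model, train_loader,
                                            num_stages=5, epochs_per_stage=3,
                                            alpha=0.3, lr=0.1):
    model = make_model()
    for stage in range(num_stages):
        # Freeze a copy of the current model as the guide for this stage.
        guide = copy.deepcopy(model)
        guide.eval()
        for p in guide.parameters():
            p.requires_grad_(False)

        optimizer = torch.optim.SGD(model.parameters(), lr=lr)
        for _ in range(epochs_per_stage):
            for x, y in train_loader:
                with torch.no_grad():
                    guide_probs = F.softmax(guide(x), dim=1)
                logits = model(x)
                # Task loss pulls toward the labels; the KL term keeps the new
                # function close to the previous stage (a proximity/Bregman term).
                task_loss = F.cross_entropy(logits, y)
                prox_loss = F.kl_div(F.log_softmax(logits, dim=1),
                                     guide_probs, reduction='batchmean')
                loss = alpha * task_loss + (1.0 - alpha) * prox_loss
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
    return model
```

Here `alpha` controls how far each stage moves from the previous function toward the labels; small values correspond to more conservative functional steps.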
