Talk

Follow the Moving Leader in Deep Learning

Shuai Zheng · James Kwok

C4.8

Abstract:

Deep networks are highly nonlinear and difficult to optimize. During training, the parameter iterate may move from one local basin to another, or the data distribution may even change. Inspired by the close connection between stochastic optimization and online learning, we propose a variant of the follow the regularized leader (FTRL) algorithm, called follow the moving leader (FTML). Unlike the FTRL family of algorithms, FTML weights recent samples more heavily in each iteration, and so can adapt more quickly to changes. We show that FTML enjoys the nice properties of RMSprop and Adam, while avoiding their pitfalls. Experimental results on a number of deep learning models and tasks demonstrate that FTML converges quickly and outperforms other state-of-the-art optimizers.
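For concreteness, the following is a minimal NumPy sketch of an FTML-style update based on the rules reported in the paper. The function name ftml_update, the state layout, and the default hyperparameters (lr, beta1, beta2, eps) are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def ftml_update(theta, g, state, t, lr=0.002, beta1=0.6, beta2=0.999, eps=1e-8):
    """One FTML step (sketch). Hyperparameter defaults are assumptions.

    state holds v (second-moment estimate), d (per-coordinate weight),
    and z (accumulated linearized loss), all initialized to zero; t >= 1.
    """
    v, d_prev, z = state["v"], state["d"], state["z"]
    v = beta2 * v + (1.0 - beta2) * g**2                 # Adam-style 2nd moment
    # Bias-corrected per-coordinate weight; recent samples dominate.
    d = (1.0 - beta1**t) / lr * (np.sqrt(v / (1.0 - beta2**t)) + eps)
    sigma = d - beta1 * d_prev                           # incremental weight
    # Accumulate the linear part of the "moving leader" objective.
    z = beta1 * z + (1.0 - beta1) * g - sigma * theta
    theta_new = -z / d                                   # closed-form minimizer
    state.update(v=v, d=d, z=z)
    return theta_new

# Toy usage: minimize f(x) = 0.5 * ||x||^2.
theta = np.array([5.0, -3.0])
state = {"v": np.zeros(2), "d": np.zeros(2), "z": np.zeros(2)}
for t in range(1, 501):
    g = theta                      # gradient of 0.5 * ||x||^2 at theta
    theta = ftml_update(theta, g, state, t)
print(theta)                       # approaches the minimizer at 0
```

Because each step minimizes a surrogate in which past gradients are decayed geometrically by beta1, the "leader" being followed moves with the recent data, which is the intended contrast with FTRL's equal weighting of all past samples.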
