Asynchronous Stochastic Quasi-Newton MCMC for Non-Convex Optimization
Umut Simsekli · Cagatay Yildiz · Thanh Huy Nguyen · Ali Taylan Cemgil · Gaël RICHARD

Thu Jul 12 09:15 AM -- 12:00 PM (PDT) @ Hall B #82
Recent studies have illustrated that stochastic gradient Markov chain Monte Carlo (SG-MCMC) techniques have strong potential in non-convex optimization, where local and global convergence guarantees can be shown under certain conditions. Building on this recent theory, we develop an asynchronous-parallel stochastic L-BFGS algorithm for non-convex optimization. The proposed algorithm is suitable for both distributed and shared-memory settings. We provide a formal theoretical analysis and show that the proposed method achieves an ergodic convergence rate of ${\cal O}(1/\sqrt{N})$ ($N$ being the total number of iterations) and that it can achieve a linear speedup under certain conditions. We perform several experiments on both synthetic and real datasets. The results support our theory and show that the proposed algorithm provides a significant speedup over a recently proposed synchronous distributed L-BFGS algorithm.
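To give a rough sense of the kind of update such a method performs, the following is a minimal, single-threaded sketch of a stochastic-gradient quasi-Newton MCMC step: an L-BFGS two-loop recursion supplies the drift direction and Gaussian noise is injected as in SGLD. This is an illustrative assumption, not the authors' algorithm; it omits the asynchronous-parallel execution, the correction terms required for a valid sampler, and the guarantees analyzed in the paper. The function names `two_loop_recursion` and `sgmcmc_lbfgs_step` are hypothetical.

```python
# Toy sketch only: NOT the paper's method. Asynchronous updates, bias
# corrections, and convergence guarantees from the paper are omitted.
import numpy as np

def two_loop_recursion(grad, s_list, y_list):
    """Standard L-BFGS two-loop recursion: approximate H^{-1} @ grad
    from stored curvature pairs (s_k, y_k)."""
    q = grad.copy()
    alphas = []
    # Backward pass over the pairs, newest first.
    for s, y in zip(reversed(s_list), reversed(y_list)):
        rho = 1.0 / (y @ s)
        alpha = rho * (s @ q)
        q -= alpha * y
        alphas.append(alpha)
    # Scale by an initial Hessian estimate from the most recent pair.
    if s_list:
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    # Forward pass, oldest first.
    for (s, y), alpha in zip(zip(s_list, y_list), reversed(alphas)):
        rho = 1.0 / (y @ s)
        beta = rho * (y @ q)
        q += (alpha - beta) * s
    return q

def sgmcmc_lbfgs_step(theta, stoch_grad, s_list, y_list, step, rng):
    """One SGLD-style update preconditioned by the quasi-Newton direction:
    drift along -H^{-1} grad plus injected Gaussian noise."""
    direction = two_loop_recursion(stoch_grad(theta), s_list, y_list)
    noise = rng.standard_normal(theta.shape)
    return theta - step * direction + np.sqrt(2.0 * step) * noise
```

In a full sampler, the curvature pairs (s_k, y_k) would be built from differences of stochastic gradients, and in the asynchronous setting multiple workers would apply such updates to a shared parameter vector without waiting for one another.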

Author Information

Umut Simsekli (Telecom ParisTech)
Cagatay Yildiz (Aalto University)
Thanh Huy Nguyen (Telecom ParisTech)
Ali Taylan Cemgil (DeepMind)
Gaël Richard (Télécom ParisTech)
