
Poster
Scaling up Universal Methods for Convex Optimization
Kimon Antonakopoulos · Dong Quan Vu · Volkan Cevher · Kfir Levy · Panayotis Mertikopoulos

Thu Jul 21 03:00 PM -- 05:00 PM (PDT) @ Hall E #1212
Universal methods achieve optimal convergence rate guarantees in convex optimization without any prior knowledge of the problem's regularity parameters or the attributes of the gradient oracle employed by the method. In this regard, existing state-of-the-art algorithms achieve an $O(1/T^2)$ convergence rate in Lipschitz smooth problems with a perfect gradient oracle, and an $O(1/\sqrt{T})$ convergence speed when the underlying problem is non-smooth and/or the gradient oracle is stochastic. On the downside, these methods do not take into account the dependence of these guarantees on the problem's dimensionality, and this can have a catastrophic impact on a method's convergence, in both theory and practice. Our paper aims to bridge this gap by providing a scalable universal method - dubbed UnDERGrad - which enjoys an almost dimension-free oracle complexity in problems with a favorable geometry (like the simplex, $\ell_1$-ball or trace constraints), while retaining the order-optimal dependence on $T$ described above. These "best of both worlds" guarantees are achieved via a primal-dual update scheme inspired by the dual exploration method for variational inequalities.
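For intuition only, below is a minimal, hypothetical NumPy sketch of an adaptive dual-averaging update on the probability simplex with an entropic mirror map. It is not the paper's UnDERGrad scheme; it merely illustrates the shared "universal" ingredients (accumulating gradient feedback in the dual and adapting the step-size to the gradients actually observed), so that no smoothness or noise parameters need to be known in advance. The function names `adaptive_dual_averaging` and `entropic_projection` are illustrative, not from the paper.

```python
# Hypothetical sketch: adaptive dual averaging on the simplex (NOT UnDERGrad).
import numpy as np

def entropic_projection(y):
    """Map a dual (accumulated-gradient) vector y to the simplex via softmax."""
    z = y - y.max()                      # numerical stabilization
    p = np.exp(z)
    return p / p.sum()

def adaptive_dual_averaging(grad_oracle, dim, T=1000, radius=1.0):
    """Run T steps of adaptive dual averaging on the simplex.

    grad_oracle(x) should return a (possibly stochastic) gradient at x.
    """
    y = np.zeros(dim)                    # accumulated negative gradients (dual state)
    g_sq = 0.0                           # accumulated squared gradient norms
    x_avg = np.zeros(dim)
    for t in range(1, T + 1):
        eta = radius / np.sqrt(1.0 + g_sq)   # AdaGrad-style adaptive step-size
        x = entropic_projection(eta * y)     # primal iterate from dual state
        g = grad_oracle(x)
        g_sq += float(np.dot(g, g))
        y -= g                           # dual accumulation of feedback
        x_avg += (x - x_avg) / t         # running average of iterates
    return x_avg

# Example: minimize f(x) = ||A x - b||^2 over the simplex.
rng = np.random.default_rng(0)
A, b = rng.normal(size=(20, 50)), rng.normal(size=20)
oracle = lambda x: 2 * A.T @ (A @ x - b)
x_star = adaptive_dual_averaging(oracle, dim=50, T=2000)
print("objective:", np.sum((A @ x_star - b) ** 2))
```

Because the mirror map is entropic, the step-size depends on the simplex "radius" only logarithmically in the dimension, which is the kind of favorable geometry the abstract refers to; the actual UnDERGrad updates and guarantees are given in the paper.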

#### Author Information

##### Dong Quan Vu (Laboratoire d'informatique de Grenoble - LIG)

I work as a research engineer at the Mathematics and Algorithms for Temporal Data (MATD) Lab at Safran Tech, France. My main research area is Artificial Intelligence. My current focus is on Bayesian Learning for prognostics and diagnostics of gas turbines. Besides this, my research interests lie at the intersection of Online Learning, Optimization, and Game Theory, with applications in Operations Research, Resource Management, and Multi-agent Systems.