Parameter-free Online Optimization

Francesco Orabona, Ashok Cutkosky

Mon 13 Jul 5 a.m. — 8 a.m. PDT
Mon 13 Jul 3 p.m. — 6 p.m. PDT
[ Video Part 1 ] [ Video Part 2 ] [ Video Part 3 ] [ Video Part 4 ]

The videos for each part of this tutorial are linked above. The SlidesLive embed below is the livestream of the entire day including the Q&A.


Classical stochastic optimization results typically assume known values for various properties of the data (e.g. Lipschitz constants, distance to an optimal point, smoothness or strong-convexity constants). Unfortunately, in practice these values are unknown, necessitating a long trial-and-error procedure to find the best parameters. To address this issue, in recent years a number of parameter-free algorithms have been developed for online optimization and for online learning. Parameter-free algorithms make no assumptions about the properties of the data and yet converge just as fast as the optimally tuned algorithm. This is an exciting line of work that has now reached enough maturity to be taught to general audiences. Indeed, these algorithms have not yet received a proper introduction in the machine learning community, and only a handful of people fully understand them. This tutorial aims to bridge this gap, presenting practice and theory for using and designing parameter-free algorithms. We will present the latest advancements in this field, including practical applications.
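To make the idea concrete, here is a minimal sketch of one well-known parameter-free construction, coin betting with the Krichevsky–Trofimov bettor, on a one-dimensional toy problem. This illustrative example and its helper names (`kt_coin_betting`, the toy objective) are our own choices, not the tutorial's presentation; it assumes subgradients bounded in [-1, 1] and uses no learning rate and no knowledge of the distance to the optimum.

```python
# Illustrative sketch (assumption: this is the standard KT coin-betting
# reduction from online learning to betting, not the tutorial's exact code).
# Subgradients must lie in [-1, 1].

def kt_coin_betting(grad, T, initial_wealth=1.0):
    """Run the Krichevsky-Trofimov coin-betting learner for T rounds.

    grad: function mapping the current iterate to a subgradient in [-1, 1].
    Returns the list of iterates x_1, ..., x_T.
    """
    wealth = initial_wealth
    sum_c = 0.0  # running sum of "coin outcomes" c_t = -g_t
    iterates = []
    for t in range(1, T + 1):
        x = (sum_c / t) * wealth  # bet a signed fraction of current wealth
        g = grad(x)
        c = -g
        wealth += c * x           # wealth update: W_t = W_{t-1} + c_t * x_t
        sum_c += c
        iterates.append(x)
    return iterates

# Toy problem: minimize f(x) = |x - 10| from subgradients alone.
# No step size is tuned, and the distance to the optimum (10) is unknown
# to the algorithm; the average iterate still approaches x* = 10.
xs = kt_coin_betting(lambda x: 1.0 if x > 10 else -1.0, T=20000)
avg = sum(xs) / len(xs)
print(avg)
```

The wealth grows multiplicatively while the bets keep "winning" (the iterate is on one side of the optimum), which is what lets the method automatically scale itself to the unknown distance to the optimal point, the quantity a hand-tuned learning rate would normally encode.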
