

Tutorial

Parameter-free Online Optimization

Francesco Orabona · Ashok Cutkosky


Abstract:

Classical stochastic optimization results typically assume known values for various properties of the data (e.g., Lipschitz constants, the distance to an optimal point, smoothness or strong-convexity constants). Unfortunately, in practice these values are unknown, necessitating a long trial-and-error procedure to find the best parameters. To address this issue, in recent years a number of parameter-free algorithms have been developed for online optimization and online learning. Parameter-free algorithms make no assumptions about the properties of the data, yet converge just as fast as an optimally tuned algorithm. This is an exciting line of work that has now reached enough maturity to be taught to general audiences. Yet these algorithms have not received a proper introduction to the machine learning community, and only a handful of people fully understand them. This tutorial aims to bridge this gap, presenting both the practice and the theory of using and designing parameter-free algorithms. We will cover the latest advancements in this field, including practical applications.
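
As a concrete illustration (not drawn from the tutorial materials themselves), the sketch below implements the Krichevsky-Trofimov coin-betting strategy, a well-known parameter-free construction from this line of work: instead of tuning a learning rate, the learner bets a data-dependent fraction of its accumulated "wealth". The function name, toy objective, and horizon are illustrative choices.

```python
# A minimal sketch, assuming 1-D online convex optimization with
# subgradients bounded in [-1, 1]; this is the Krichevsky-Trofimov
# coin-betting learner, not the tutorial's own code.
def kt_coin_betting(subgrad, T=10_000, initial_wealth=1.0):
    """Run T rounds of KT coin betting; subgrad(x) must lie in [-1, 1].

    No learning rate is tuned: the bet is a data-dependent fraction of
    the current wealth, and the regret adapts to the unknown distance
    between the starting point and the comparator.
    """
    wealth = initial_wealth  # W_0; the guarantee degrades only mildly in it
    coin_sum = 0.0           # sum of past coin outcomes c_i = -g_i
    avg_x = 0.0              # running average of iterates (online-to-batch)
    for t in range(1, T + 1):
        beta = coin_sum / t  # KT betting fraction, always in (-1, 1)
        x = beta * wealth    # bet: the iterate played this round
        g = subgrad(x)       # oracle/adversary reveals a subgradient
        wealth -= g * x      # wealth update: W_t = W_{t-1} - g_t * x_t
        coin_sum -= g        # accumulate coin outcome c_t = -g_t
        avg_x += (x - avg_x) / t
    return avg_x

# Toy check on f(x) = |x - 10|, whose subgradients lie in [-1, 1]:
print(kt_coin_betting(lambda x: 1.0 if x > 10 else (-1.0 if x < 10 else 0.0)))
# The average iterate approaches the optimum 10.0 with no step size chosen.
```

Note the parameter-free behavior: wealth stays positive because |beta| < 1, and the bet grows or shrinks automatically with the observed gradients, so no knowledge of the distance to the optimum is required.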
