Poster
Learning-to-Learn Stochastic Gradient Descent with Biased Regularization
Giulia Denevi · Carlo Ciliberto · Riccardo Grazzi · Massimiliano Pontil

Wed Jun 12 06:30 PM -- 09:00 PM (PDT) @ Pacific Ballroom #257

We study the problem of learning-to-learn: inferring a learning algorithm that works well on a family of tasks sampled from an unknown distribution. As the class of algorithms we consider Stochastic Gradient Descent (SGD) on the true risk regularized by the squared Euclidean distance from a bias vector. We present an average excess risk bound for such a learning algorithm that quantifies the potential benefit of using a bias vector with respect to the unbiased case. We then propose a novel meta-algorithm to estimate the bias term online from a sequence of observed tasks. The small memory footprint and low time complexity of our approach make it appealing in practice, while our theoretical analysis provides guarantees on the generalization properties of the meta-algorithm on new tasks. A key feature of our results is that, when the number of tasks grows and their variance is relatively small, our learning-to-learn approach has a significant advantage over learning each task in isolation by standard SGD without a bias term. Numerical experiments demonstrate the effectiveness of our approach in practice.
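The abstract describes two nested procedures: within each task, SGD is run on a risk regularized by (λ/2)·||w − h||², where h is the bias vector; across tasks, a meta-algorithm updates h online. The following is a minimal illustrative sketch, assuming a least-squares loss; the function names, step sizes, and the running-average meta-update are assumptions made here for concreteness, not the paper's exact meta-algorithm (which takes (sub)gradient steps on a meta-objective).

```python
import numpy as np

def biased_sgd(X, y, h, lam=0.1, eta=0.01, epochs=1):
    """One pass of SGD on the least-squares risk with biased
    regularization (lam/2) * ||w - h||^2, initialized at the bias h."""
    w = h.copy()
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):
            # gradient of (1/2)(x.w - y)^2 plus gradient of the bias term
            grad = (x_i @ w - y_i) * x_i + lam * (w - h)
            w -= eta * grad
    return w

def meta_learn_bias(tasks, d, lam=0.1, eta=0.01, gamma=1.0):
    """Illustrative online meta-update (an assumption, not the paper's
    exact rule): after each task, move the bias toward that task's SGD
    solution with a decaying step, i.e. keep a running average."""
    h = np.zeros(d)
    for t, (X, y) in enumerate(tasks, start=1):
        w_t = biased_sgd(X, y, h, lam=lam, eta=eta)
        h += (gamma / t) * (w_t - h)
    return h

# Synthetic check: task weight vectors clustered around a common mean,
# i.e. the low-task-variance regime where a shared bias should help.
rng = np.random.default_rng(0)
d, n = 5, 50
w_star = rng.normal(size=d)
tasks = []
for _ in range(20):
    w_task = w_star + 0.1 * rng.normal(size=d)
    X = rng.normal(size=(n, d))
    y = X @ w_task + 0.01 * rng.normal(size=n)
    tasks.append((X, y))
h = meta_learn_bias(tasks, d)
```

In this regime the learned bias h approaches the common vector w_star, so biased SGD on a new task starts close to its solution, which is the advantage over unbiased, in-isolation SGD that the bound in the paper quantifies.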

Author Information

Giulia Denevi (IIT)
Carlo Ciliberto (Imperial College London)
Riccardo Grazzi (Istituto Italiano di Tecnologia - University College London)
Massimiliano Pontil (University College London)
