We study the interplay between surrogate methods for structured prediction and techniques from multitask learning designed to leverage relationships between surrogate outputs.
We propose an efficient algorithm based on trace norm regularization which, unlike previous methods, does not require explicit knowledge of the coding/decoding functions of the surrogate framework.
As a result, our algorithm can be applied to the broad class of problems in which the surrogate space is large or even infinite dimensional. We study excess risk bounds for trace norm regularized structured prediction, proving consistency and establishing learning rates for our estimator. We also identify relevant regimes in which our approach enjoys better generalization performance than previous methods.
Numerical experiments on ranking problems indicate that enforcing low-rank relations among surrogate outputs may indeed provide a significant advantage in practice.
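The core computational idea the abstract names, trace norm regularization of a multitask (surrogate-output) weight matrix, can be illustrated with a minimal sketch. The code below is not the paper's algorithm; it is a generic proximal gradient solver for least-squares multitask regression with a trace norm penalty, where the proximal step is singular value thresholding. All function names and the toy low-rank data are illustrative assumptions.

```python
import numpy as np

def svt(W, tau):
    """Singular value thresholding: the proximal operator of tau * ||W||_*."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def trace_norm_regression(X, Y, lam=0.01, iters=500):
    """Proximal gradient descent on (1/2n)||XW - Y||_F^2 + lam * ||W||_*.

    X: (n, d) inputs; Y: (n, T) surrogate outputs; returns W of shape (d, T).
    """
    n, d = X.shape
    T = Y.shape[1]
    # Step size 1/L, where L = ||X||_2^2 / n is the gradient Lipschitz constant.
    step = n / (np.linalg.norm(X, 2) ** 2)
    W = np.zeros((d, T))
    for _ in range(iters):
        grad = X.T @ (X @ W - Y) / n
        W = svt(W - step * grad, step * lam)
    return W

# Toy example: surrogate outputs generated by a rank-2 weight matrix,
# mimicking low-rank relations among surrogate tasks.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
W_true = rng.standard_normal((10, 2)) @ rng.standard_normal((2, 8))  # rank 2
Y = X @ W_true + 0.01 * rng.standard_normal((200, 8))
W_hat = trace_norm_regression(X, Y, lam=0.01)
```

The trace norm penalty biases the estimated weight matrix toward low rank, which is the mechanism by which relations between surrogate outputs are leveraged; the paper's contribution is making this work without explicit access to the surrogate coding/decoding maps.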
Author Information
Giulia Luise (University College London)
Dimitrios Stamos (University College London)
Massimiliano Pontil (Istituto Italiano di Tecnologia and University College London)
Carlo Ciliberto (Imperial College London)
Related Events (a corresponding poster, oral, or spotlight)
2019 Poster: Leveraging Low-Rank Relations Between Surrogate Tasks in Structured Prediction
Wed. Jun 12th, 01:30 -- 04:00 AM, Pacific Ballroom #202
More from the Same Authors
2022 Poster: Measuring dissimilarity with diffeomorphism invariance
Théophile Cantelobre · Carlo Ciliberto · Benjamin Guedj · Alessandro Rudi
2022 Poster: Distribution Regression with Sliced Wasserstein Kernels
Dimitri Marie Meunier · Massimiliano Pontil · Carlo Ciliberto
2022 Spotlight: Distribution Regression with Sliced Wasserstein Kernels
Dimitri Marie Meunier · Massimiliano Pontil · Carlo Ciliberto
2022 Spotlight: Measuring dissimilarity with diffeomorphism invariance
Théophile Cantelobre · Carlo Ciliberto · Benjamin Guedj · Alessandro Rudi
2021 Poster: Best Model Identification: A Rested Bandit Formulation
Leonardo Cella · Massimiliano Pontil · Claudio Gentile
2021 Spotlight: Best Model Identification: A Rested Bandit Formulation
Leonardo Cella · Massimiliano Pontil · Claudio Gentile
2020 Poster: On the Iteration Complexity of Hypergradient Computation
Riccardo Grazzi · Luca Franceschi · Massimiliano Pontil · Saverio Salzo
2020 Poster: Meta-learning with Stochastic Linear Bandits
Leonardo Cella · Alessandro Lazaric · Massimiliano Pontil
2019 Poster: Learning-to-Learn Stochastic Gradient Descent with Biased Regularization
Giulia Denevi · Carlo Ciliberto · Riccardo Grazzi · Massimiliano Pontil
2019 Poster: Learning Discrete Structures for Graph Neural Networks
Luca Franceschi · Mathias Niepert · Massimiliano Pontil · Xiao He
2019 Oral: Learning Discrete Structures for Graph Neural Networks
Luca Franceschi · Mathias Niepert · Massimiliano Pontil · Xiao He
2019 Oral: Learning-to-Learn Stochastic Gradient Descent with Biased Regularization
Giulia Denevi · Carlo Ciliberto · Riccardo Grazzi · Massimiliano Pontil
2019 Poster: Random Expert Distillation: Imitation Learning via Expert Policy Support Estimation
Ruohan Wang · Carlo Ciliberto · Pierluigi Vito Amadori · Yiannis Demiris
2019 Oral: Random Expert Distillation: Imitation Learning via Expert Policy Support Estimation
Ruohan Wang · Carlo Ciliberto · Pierluigi Vito Amadori · Yiannis Demiris