We introduce a framework based on bilevel programming that unifies gradient-based hyperparameter optimization and meta-learning. We show that an approximate version of the bilevel problem can be solved by taking explicitly into account the optimization dynamics of the inner objective. Depending on the specific setting, the outer variables take the meaning of either hyperparameters in a supervised learning problem or parameters of a meta-learner. We provide sufficient conditions under which solutions of the approximate problem converge to those of the exact problem. We instantiate our approach for meta-learning in the case of deep learning, where representation layers are treated as hyperparameters shared across a set of training episodes. In experiments, we confirm our theoretical findings, present encouraging results for few-shot learning, and contrast the bilevel approach against classical approaches for learning-to-learn.
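The abstract's key idea — differentiating an outer (validation) objective through the optimization dynamics of an inner (training) objective — can be illustrated with a small sketch. Below, the inner problem is ridge regression solved by T steps of gradient descent, the outer objective is validation MSE, and the hypergradient with respect to the regularization strength is computed in forward mode by propagating Z = dw/dλ alongside the weights. All names and the toy data are ours, for illustration only; this is a minimal sketch of the general technique, not the paper's implementation.

```python
import numpy as np

# Toy data: the inner objective is ridge regression on a training split,
# the outer objective is mean-squared error on a validation split.
rng = np.random.default_rng(0)
X_tr, y_tr = rng.normal(size=(20, 3)), rng.normal(size=20)
X_val, y_val = rng.normal(size=(10, 3)), rng.normal(size=10)

def hypergradient(lam, T=100, eta=0.01):
    """Unroll T steps of gradient descent on the inner objective,
    propagating Z = dw/dlam in forward mode alongside w."""
    n = len(y_tr)
    w = np.zeros(3)
    Z = np.zeros(3)  # dw/dlam
    for _ in range(T):
        g = X_tr.T @ (X_tr @ w - y_tr) / n + lam * w   # inner gradient
        H = X_tr.T @ X_tr / n + lam * np.eye(3)        # dg/dw
        Z = Z - eta * (H @ Z + w)                      # dg/dlam = w
        w = w - eta * g                                # inner update
    # Chain the outer (validation) gradient through dw/dlam.
    dval_dw = X_val.T @ (X_val @ w - y_val) / len(y_val)
    return w, float(dval_dw @ Z)

w_final, hg = hypergradient(0.1)
```

The returned scalar `hg` approximates the derivative of the validation loss with respect to λ after T inner steps, and can be checked against a finite-difference estimate of the same unrolled objective.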
Author Information
Luca Franceschi (Istituto Italiano di Tecnologia - University College London)
Paolo Frasconi (University of Florence)
Saverio Salzo (Istituto Italiano di Tecnologia)
Riccardo Grazzi (Istituto Italiano di Tecnologia)
Massimiliano Pontil (University College London)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Poster: Bilevel Programming for Hyperparameter Optimization and Meta-Learning
  Wed. Jul 11th, 04:15 -- 07:00 PM, Room Hall B #67
More from the Same Authors
- 2022 Poster: Bregman Neural Networks
  Jordan Frecon · Gilles Gasso · Massimiliano Pontil · Saverio Salzo
- 2022 Spotlight: Bregman Neural Networks
  Jordan Frecon · Gilles Gasso · Massimiliano Pontil · Saverio Salzo
- 2022 Poster: Batch Greenkhorn Algorithm for Entropic-Regularized Multimarginal Optimal Transport: Linear Rate of Convergence and Iteration Complexity
  Vladimir Kostic · Saverio Salzo · Massimiliano Pontil
- 2022 Spotlight: Batch Greenkhorn Algorithm for Entropic-Regularized Multimarginal Optimal Transport: Linear Rate of Convergence and Iteration Complexity
  Vladimir Kostic · Saverio Salzo · Massimiliano Pontil
- 2020 Poster: On the Iteration Complexity of Hypergradient Computation
  Riccardo Grazzi · Luca Franceschi · Massimiliano Pontil · Saverio Salzo
- 2019 Poster: Learning-to-Learn Stochastic Gradient Descent with Biased Regularization
  Giulia Denevi · Carlo Ciliberto · Riccardo Grazzi · Massimiliano Pontil
- 2019 Poster: Learning Discrete Structures for Graph Neural Networks
  Luca Franceschi · Mathias Niepert · Massimiliano Pontil · Xiao He
- 2019 Oral: Learning Discrete Structures for Graph Neural Networks
  Luca Franceschi · Mathias Niepert · Massimiliano Pontil · Xiao He
- 2019 Oral: Learning-to-Learn Stochastic Gradient Descent with Biased Regularization
  Giulia Denevi · Carlo Ciliberto · Riccardo Grazzi · Massimiliano Pontil
- 2017 Poster: Forward and Reverse Gradient-Based Hyperparameter Optimization
  Luca Franceschi · Michele Donini · Paolo Frasconi · Massimiliano Pontil
- 2017 Talk: Forward and Reverse Gradient-Based Hyperparameter Optimization
  Luca Franceschi · Michele Donini · Paolo Frasconi · Massimiliano Pontil