We study two procedures (reverse-mode and forward-mode) for computing the gradient of the validation error with respect to the hyperparameters of any iterative learning algorithm such as stochastic gradient descent. These procedures mirror two ways of computing gradients for recurrent neural networks and have different trade-offs in terms of running time and space requirements. Our formulation of the reverse-mode procedure is linked to previous work by Maclaurin et al. (2015) but does not require reversible dynamics. Additionally, we explore the use of constraints on the hyperparameters. The forward-mode procedure is suitable for real-time hyperparameter updates, which may significantly speed up hyperparameter optimization on large datasets. We present a series of experiments on image and phone classification tasks; on the second task, previous gradient-based approaches are prohibitively expensive. We show that our real-time algorithm yields state-of-the-art results in affordable time.
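To make the forward-mode idea concrete, here is a minimal sketch (not the authors' code) of forward-mode hypergradient computation for gradient descent on a quadratic training loss, with the learning rate eta as the single hyperparameter; the quantities A, b, w_val, T, and the quadratic losses are illustrative assumptions. The tangent Z_t = dw_t/d(eta) is propagated forward alongside the parameters, so the hypergradient of the validation error is available immediately at the end of training (or, in the real-time variant, at any intermediate step).

```python
# Minimal forward-mode hypergradient sketch (illustrative, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)
d, T, eta = 5, 50, 0.1
A = np.diag(rng.uniform(0.5, 1.5, d))   # training loss: L(w) = 0.5 w'Aw - b'w  (assumed)
b = rng.normal(size=d)
w_val = rng.normal(size=d)              # validation error: E(w) = 0.5 ||w - w_val||^2  (assumed)

w = np.zeros(d)                          # model parameters w_t
Z = np.zeros(d)                          # tangent Z_t = dw_t / d(eta), propagated forward
for t in range(T):
    g = A @ w - b                        # gradient of the training loss at w_t
    # Forward-mode recursion: Z_{t+1} = (dPhi/dw) Z_t + dPhi/d(eta),
    # where Phi(w, eta) = w - eta * grad L(w) is the gradient-descent map.
    Z = (np.eye(d) - eta * A) @ Z - g
    w = w - eta * g                      # parameter update w_{t+1} = Phi(w_t, eta)

# Hypergradient: dE(w_T)/d(eta) = dE/dw_T . Z_T
hypergrad = (w - w_val) @ Z
print("forward-mode hypergradient:", hypergrad)
```

The same recursion applies to any smooth update map and any vector of hyperparameters; its memory cost scales with the number of hyperparameters rather than the number of iterations, which is what makes real-time updates feasible.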
Author Information
Luca Franceschi (IIT and UCL)
Michele Donini (IIT)
Paolo Frasconi (University of Florence)
Massimiliano Pontil (University College London)
Related Events (a corresponding poster, oral, or spotlight)
- 2017 Poster: Forward and Reverse Gradient-Based Hyperparameter Optimization »
  Tue. Aug 8th 08:30 AM -- 12:00 PM Room Gallery #92
More from the Same Authors
- 2019 Poster: Learning-to-Learn Stochastic Gradient Descent with Biased Regularization »
  Giulia Denevi · Carlo Ciliberto · Riccardo Grazzi · Massimiliano Pontil
- 2019 Oral: Learning-to-Learn Stochastic Gradient Descent with Biased Regularization »
  Giulia Denevi · Carlo Ciliberto · Riccardo Grazzi · Massimiliano Pontil
- 2018 Poster: Bilevel Programming for Hyperparameter Optimization and Meta-Learning »
  Luca Franceschi · Paolo Frasconi · Saverio Salzo · Riccardo Grazzi · Massimiliano Pontil
- 2018 Oral: Bilevel Programming for Hyperparameter Optimization and Meta-Learning »
  Luca Franceschi · Paolo Frasconi · Saverio Salzo · Riccardo Grazzi · Massimiliano Pontil