

Poster

Fast-Rate PAC-Bayesian Generalization Bounds for Meta-Learning

Jiechao Guan · Zhiwu Lu

Hall E #626

Keywords: [ PM: Bayesian Models and Methods ] [ MISC: Supervised Learning ] [ T: Domain Adaptation and Transfer Learning ] [ T: Probabilistic Methods ] [ T: Miscellaneous Aspects of Machine Learning ] [ MISC: Transfer, Multitask and Meta-learning ]


Abstract:

PAC-Bayesian error bounds provide a theoretical guarantee on the generalization ability of meta-learning from training tasks to unseen tasks. However, it remains unclear how tight the PAC-Bayesian bounds for meta-learning can be made. In this work, we propose a general PAC-Bayesian framework that handles single-task learning and meta-learning in a unified manner. With this framework, we generalize the two tightest PAC-Bayesian bounds (i.e., the kl-bound and the Catoni-bound) from single-task learning to standard meta-learning, yielding fast convergence rates for PAC-Bayesian meta-learners. By minimizing the two derived bounds, we develop two meta-learning algorithms for classification problems with deep neural networks. For regression problems, by setting the Gibbs optimal posterior for each training task, we obtain a closed-form formula for the minimizer of our Catoni-bound, leading to an efficient Gibbs meta-learning algorithm. Although minimizing our kl-bound does not yield a closed-form solution, we show that it can be extended to analyze the more challenging meta-learning setting in which samples from different training tasks exhibit interdependencies. Experiments show that our proposed meta-learning algorithms achieve competitive results compared with recent state-of-the-art methods.
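For context, the classical single-task forms of the two bounds named in the abstract can be sketched as follows. The notation (empirical risk \hat{R}_S(Q), expected risk R(Q), prior P, posterior Q, sample size n, confidence level \delta) is assumed here for illustration; the paper's contribution is to lift these bounds to the meta-learning (task-environment) level, so the sketch below is background rather than the paper's actual bounds.

kl-bound (Maurer/Seeger form): with probability at least 1 - \delta over the sample S, for all posteriors Q,
  \mathrm{kl}\big(\hat{R}_S(Q) \,\|\, R(Q)\big) \le \frac{\mathrm{KL}(Q \,\|\, P) + \ln(2\sqrt{n}/\delta)}{n},
where \mathrm{kl}(q \,\|\, p) = q \ln\frac{q}{p} + (1-q) \ln\frac{1-q}{1-p} is the binary relative entropy.

Catoni-bound: for any fixed C > 0, with probability at least 1 - \delta, for all posteriors Q,
  R(Q) \le \frac{1 - \exp\!\big(-C\,\hat{R}_S(Q) - \tfrac{\mathrm{KL}(Q \,\|\, P) + \ln(1/\delta)}{n}\big)}{1 - \exp(-C)}.

The Gibbs posterior mentioned for the regression setting is the distribution
  Q^{*}(h) \propto P(h)\,\exp\!\big(-\beta\,\hat{R}_S(h)\big),
which minimizes \beta\,\hat{R}_S(Q) + \mathrm{KL}(Q \,\|\, P) over all Q; fixing each per-task posterior to this form is what makes a closed-form minimizer of a Catoni-type bound plausible.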
