Poster

PACOH: Bayes-Optimal Meta-Learning with PAC-Guarantees

Jonas Rothfuss · Vincent Fortuin · Martin Josifoski · Andreas Krause

Keywords: [ Algorithms ] [ Multitask, Transfer, and Meta Learning ]

Thu 22 Jul 9 a.m. PDT — 11 a.m. PDT
 
Spotlight presentation: Reinforcement Learning 15
Thu 22 Jul 5 a.m. PDT — 6 a.m. PDT

Abstract:

Meta-learning can successfully acquire useful inductive biases from data. Yet, its generalization properties to unseen learning tasks are poorly understood. Particularly when the number of meta-training tasks is small, this raises concerns about overfitting. We provide a theoretical analysis using the PAC-Bayesian framework and derive novel generalization bounds for meta-learning. Using these bounds, we develop a class of PAC-optimal meta-learning algorithms with performance guarantees and a principled meta-level regularization. Unlike previous PAC-Bayesian meta-learners, our method results in a standard stochastic optimization problem which can be solved efficiently and scales well. When instantiating our PAC-optimal hyper-posterior (PACOH) with Gaussian processes and Bayesian neural networks as base learners, the resulting methods yield state-of-the-art performance, both in terms of predictive accuracy and the quality of uncertainty estimates. Thanks to their principled treatment of uncertainty, our meta-learners can also be successfully employed for sequential decision problems.
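
To make the general recipe concrete, below is a minimal illustrative sketch of regularized meta-learning with a Gaussian-process base learner: the meta-objective averages per-task GP marginal likelihoods and adds a meta-level regularizer standing in for the KL-to-hyper-prior term of a PAC-Bayesian bound. The exact PACOH bound, hyper-posterior parameterization, and training procedure are given in the paper; the RBF kernel, the L2 regularizer, the toy sine tasks, and all names below are assumptions made purely for illustration, not the authors' implementation.

```python
# Illustrative sketch (not the PACOH implementation): meta-learn GP prior
# hyper-parameters by minimizing an averaged per-task negative log marginal
# likelihood plus a meta-level regularizer (a stand-in for the bound's KL term).
import numpy as np
from scipy.optimize import minimize


def rbf_kernel(x1, x2, log_lengthscale, log_signal):
    # Squared-exponential kernel with meta-learned hyper-parameters.
    d = x1[:, None] - x2[None, :]
    return np.exp(2 * log_signal) * np.exp(-0.5 * (d / np.exp(log_lengthscale)) ** 2)


def gp_neg_log_marginal_likelihood(params, x, y, noise=0.1):
    # Negative log marginal likelihood of a zero-mean GP on a single task.
    K = rbf_kernel(x, x, params[0], params[1]) + noise ** 2 * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.sum(np.log(np.diag(L))) + 0.5 * len(x) * np.log(2 * np.pi)


def meta_objective(params, tasks, reg_weight=0.1):
    # Average task-level fit plus a meta-level regularizer: the same structural
    # form (empirical term + complexity term) as a PAC-Bayesian meta-learning bound.
    nll = sum(gp_neg_log_marginal_likelihood(params, x, y) for x, y in tasks)
    return nll / len(tasks) + reg_weight * np.sum(params ** 2)


# Toy meta-training tasks: noisy sine curves with random phases (assumed data).
rng = np.random.default_rng(0)
tasks = []
for _ in range(5):
    x = rng.uniform(-3, 3, size=20)
    y = np.sin(x + rng.uniform(0, np.pi)) + 0.1 * rng.normal(size=20)
    tasks.append((x, y))

result = minimize(meta_objective, x0=np.zeros(2), args=(tasks,), method="L-BFGS-B")
print("meta-learned kernel hyper-parameters (log lengthscale, log signal):", result.x)
```

The sketch is a standard stochastic-optimization-style problem over shared prior parameters; the paper's contribution is that the regularizer and its weight follow from a PAC-Bayesian generalization bound rather than being chosen ad hoc.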
