A Unified View on PAC-Bayes Bounds for Meta-Learning

Arezou Rezazadeh

Room 310
Session: Probabilistic Methods/MISC
Thu 21 Jul 11:40 a.m. — 11:45 a.m. PDT

Meta-learning automatically infers an inductive bias, which includes the hyperparameters of the base learning algorithm, by observing data from a finite number of related tasks. This paper studies PAC-Bayes bounds on the meta-generalization gap. The meta-generalization gap comprises two sources of generalization error: the environment-level and task-level gaps, resulting from observing a finite number of tasks and a finite number of data samples per task, respectively. In this paper, by upper bounding arbitrary convex functions that link the expected and empirical losses at both the environment level and the per-task level, we obtain new PAC-Bayes bounds. Using these bounds, we develop new PAC-Bayes meta-learning algorithms. Numerical examples demonstrate the merits of the proposed bounds and algorithms in comparison to prior PAC-Bayes bounds for meta-learning.
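To make the convex-function device concrete, here is a sketch of the classical single-task PAC-Bayes template (in the style of Germain et al.) that the abstract's per-task bounds generalize. The notation below (sample size n, prior P, posterior Q, convex comparator Δ, confidence δ) is generic and not taken from the paper itself:

```latex
% For any convex \Delta : [0,1]^2 \to \mathbb{R} linking empirical loss
% \hat{L} and expected loss L, any data-free prior P, with probability
% at least 1-\delta over the sample S of size n, simultaneously for all
% posteriors Q:
\Delta\!\Big(\mathbb{E}_{h\sim Q}\big[\hat{L}(h)\big],\,
             \mathbb{E}_{h\sim Q}\big[L(h)\big]\Big)
\;\le\;
\frac{1}{n}\left[\mathrm{KL}(Q\,\|\,P) + \ln\frac{I_{\Delta}(n)}{\delta}\right],
\qquad
I_{\Delta}(n) \;=\; \mathbb{E}_{h\sim P}\,\mathbb{E}_{S}\,
e^{\,n\,\Delta\left(\hat{L}(h),\,L(h)\right)}.
% Different choices of \Delta recover known bounds; e.g.
% \Delta(p,q) = 2(p-q)^2 yields a McAllester-type square-root bound.
```

Applying such a template once at the environment level (over tasks) and once at the task level (over samples within each task), with possibly different convex functions, is one way to read the two-level structure described in the abstract.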
