Neural networks have been successfully applied in applications with large amounts of labeled data. However, the task of rapidly generalizing to new concepts from small training data, while preserving performance on previously learned concepts, still presents a significant challenge to neural network models. In this work, we introduce a novel meta-learning method, Meta Networks (MetaNet), that learns meta-level knowledge across tasks and shifts its inductive biases via fast parameterization for rapid generalization. When evaluated on the Omniglot and Mini-ImageNet benchmarks, our MetaNet models achieve near human-level performance and outperform the baseline approaches by up to 6% in accuracy. We demonstrate several appealing properties of MetaNet relating to generalization and continual learning.
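The "fast parameterization" idea can be pictured as a layer that combines slow, task-general weights with fast, example-conditioned weights produced by a meta-learner. The sketch below is an illustrative assumption, not the paper's exact architecture: both weight sets are applied separately and their activations aggregated by a simple sum.

```python
import numpy as np

def augmented_layer(x, slow_w, fast_w):
    # Sketch of an augmented layer: apply slow (learned across tasks)
    # and fast (generated per example by a meta-learner) weights
    # separately, then aggregate. Summation after the nonlinearity is
    # an illustrative choice made here, not taken from the paper.
    relu = lambda z: np.maximum(z, 0.0)
    return relu(slow_w @ x) + relu(fast_w @ x)

rng = np.random.default_rng(0)
x = rng.normal(size=4)
slow_w = rng.normal(size=(3, 4))  # slow weights: shared, slowly updated
fast_w = rng.normal(size=(3, 4))  # fast weights: hypothetically emitted per input
print(augmented_layer(x, slow_w, fast_w).shape)  # (3,)
```

Because the fast weights are regenerated per example (or per task), the layer's effective inductive bias can shift without modifying the slow weights, which is what enables rapid generalization while older knowledge is preserved.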
Tsendsuren Munkhdalai (University of Massachusetts)
Hong Yu (University of Massachusetts)
Related Events (a corresponding poster, oral, or spotlight)
2017 Talk: Meta Networks
Mon Aug 7th, 05:48 -- 06:06 AM, Darling Harbour Theatre