Oral
TapNet: Neural Network Augmented with Task-Adaptive Projection for Few-Shot Learning
Sung Whan Yoon · Jun Seo · Jaekyun Moon
Handling previously unseen tasks when given only a few training examples continues to be a tough challenge in machine learning. We propose TapNets, neural networks augmented with task-adaptive projection for improved few-shot learning. Here, employing a meta-learning strategy with episode-based training, a network and a set of per-class reference vectors are learned slowly over widely varying tasks. At the same time, for every episode, features in the embedding space are linearly projected into a new space as a form of quick task-specific conditioning. The training loss is based on a distance metric between the query features and the reference vectors in the projection space. This approach yields excellent generalization. When tested on the Omniglot, miniImageNet and tieredImageNet datasets, we obtain state-of-the-art classification accuracies under different few-shot scenarios.
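The abstract describes three episode-level ingredients: a slowly learned embedding network, slowly learned per-class reference vectors, and a per-episode linear projection in which queries are classified by distance to the references. The sketch below illustrates one way these pieces could fit together in a single episode; the specific projection construction (the null space of the error vectors between normalized references and support-class averages, via SVD), the function names, and all shapes are assumptions for illustration, not the authors' implementation.

```python
# Minimal PyTorch sketch of one few-shot episode, under the assumptions stated above.
import torch
import torch.nn.functional as F


def episode_logits(embed, refs, support_x, support_y, query_x, num_classes):
    """Compute query logits for a single episode.

    embed       : embedding network mapping inputs -> d-dim features (slow-learned)
    refs        : (num_classes, d) learnable per-class reference vectors (slow-learned)
    support_x/y : labeled support examples for this episode
    query_x     : query examples to classify
    """
    s_feat = embed(support_x)                      # (n_support, d)
    q_feat = embed(query_x)                        # (n_query, d)

    # Per-class averages of the support features.
    protos = torch.stack(
        [s_feat[support_y == k].mean(dim=0) for k in range(num_classes)]
    )                                              # (num_classes, d)

    # Quick task-specific conditioning: build a linear projection for this
    # episode. Assumed construction: project onto the subspace orthogonal to
    # the per-class error between normalized references and class averages.
    err = F.normalize(refs, dim=1) - F.normalize(protos, dim=1)
    _, _, vh = torch.linalg.svd(err)               # vh: (d, d), full SVD
    M = vh[num_classes:]                           # null-space basis, (d - num_classes, d)

    # Distances between projected queries and projected references.
    q_proj = q_feat @ M.T                          # (n_query, d - num_classes)
    r_proj = refs @ M.T                            # (num_classes, d - num_classes)
    dists = torch.cdist(q_proj, r_proj)            # (n_query, num_classes)

    # Softmax over negative distances gives class probabilities.
    return -dists
```

A cross-entropy loss on these logits would then update both the embedding network and the reference vectors across episodes, while the projection itself is recomputed from scratch for every new task.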