Handling previously unseen tasks given only a few training examples continues to be a tough challenge in machine learning. We propose TapNets, neural networks augmented with task-adaptive projection for improved few-shot learning. Here, employing a meta-learning strategy with episode-based training, a network and a set of per-class reference vectors are learned across widely varying tasks. At the same time, for every episode, features in the embedding space are linearly projected into a new space as a form of quick, task-specific conditioning. The training loss is obtained based on a distance metric between the query and the reference vectors in the projection space. This approach yields excellent generalization. When tested on the Omniglot, miniImageNet, and tieredImageNet datasets, we obtain state-of-the-art classification accuracies under various few-shot scenarios.
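To make the mechanism concrete, below is a minimal NumPy sketch of one episode: class prototypes are averaged from the support set, a task-adaptive projection is built as the null space of the errors between (normalized) reference vectors and prototypes, and queries are classified by Euclidean distance to the projected references. This is an illustrative reading of the abstract, not the authors' implementation: the function `tapnet_episode` and its signature are hypothetical, and the paper's exact error-vector construction differs in detail from the plain version used here.

import numpy as np

def tapnet_episode(support_emb, support_lbl, query_emb, refs):
    """Illustrative sketch of one TapNet-style episode (hypothetical API).

    support_emb: (S, D) embedded support examples
    support_lbl: (S,)   integer class labels in [0, N)
    query_emb:   (Q, D) embedded query examples
    refs:        (N, D) learned per-class reference vectors
    """
    n_classes, dim = refs.shape

    # Per-class prototypes: average embedding of each class's support set.
    protos = np.stack([support_emb[support_lbl == k].mean(axis=0)
                       for k in range(n_classes)])

    # Error vectors between normalized references and prototypes
    # (simplified; the paper modifies these before taking the null space).
    err = (refs / np.linalg.norm(refs, axis=1, keepdims=True)
           - protos / np.linalg.norm(protos, axis=1, keepdims=True))

    # Task-adaptive projection: an orthonormal basis of the null space of
    # the error vectors, via SVD. In this space the references align with
    # the current task's class prototypes.
    _, _, vt = np.linalg.svd(err)       # vt: (D, D), rows sorted by singular value
    M = vt[n_classes:].T                # (D, D - N) null-space basis

    # Classify queries by squared Euclidean distance to projected references.
    q_proj = query_emb @ M              # (Q, D - N)
    r_proj = refs @ M                   # (N, D - N)
    dists = ((q_proj[:, None, :] - r_proj[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)         # predicted class per query

# Toy usage: a 5-way 5-shot episode with random 64-d embeddings.
rng = np.random.default_rng(0)
preds = tapnet_episode(rng.normal(size=(25, 64)),
                       np.repeat(np.arange(5), 5),
                       rng.normal(size=(15, 64)),
                       rng.normal(size=(5, 64)))

At training time, the distances would feed a softmax cross-entropy loss over the query set, so the embedding network and reference vectors are learned jointly across episodes while the projection itself is recomputed per task.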
Author Information
Sung Whan Yoon (Korea Advanced Institute of Science and Technology (KAIST))
I have been an assistant professor in the School of Electrical and Computer Engineering at Ulsan National Institute of Science and Technology (UNIST) since Mar. 2020. I received my Ph.D. from the School of Electrical Engineering at Korea Advanced Institute of Science and Technology (KAIST) in Aug. 2017, under the supervision of Prof. Jaekyun Moon. Before joining UNIST, I was a postdoctoral researcher at KAIST from Sep. 2017 to Mar. 2020. My research interests are in the areas of artificial intelligence, distributed systems, and information theory. Recently, I have been working on meta-learning algorithms that balance inductive bias and adaptation for efficient few-shot classification (results published at ICML 2019 and a NeurIPS 2018 workshop). As future work, I am interested in designing incremental/continual meta-learning, which captures new concepts quickly and builds combined concepts incrementally (an earlier work was accepted to ICML 2020). In another direction, I hope to develop an intelligent system built on efficient learning algorithms with the following features: the ability to learn from big data (even noisy and non-annotated data), to leverage distributed resources (storage, computation, and communication), and to be hardware-friendly.
Jun Seo (Korea Advanced Institute of Science and Technology (KAIST))
Jaekyun Moon (KAIST)
Related Events (a corresponding poster, oral, or spotlight)
-
2019 Oral: TapNet: Neural Network Augmented with Task-Adaptive Projection for Few-Shot Learning »
Tue. Jun 11th, 06:30 -- 06:35 PM, Room: Hall A
More from the Same Authors
-
2022 : Style Balancing and Test-Time Style Shifting for Domain Generalization »
Jungwuk Park · Dong-Jun Han · Soyeong Kim · Jaekyun Moon
-
2022 : Locally Supervised Learning with Periodic Global Guidance »
Hasnain Irshad Bhatti · Jaekyun Moon
-
2022 Poster: GenLabel: Mixup Relabeling using Generative Models »
Jy yong Sohn · Liang Shang · Hongxu Chen · Jaekyun Moon · Dimitris Papailiopoulos · Kangwook Lee
-
2022 Spotlight: GenLabel: Mixup Relabeling using Generative Models »
Jy yong Sohn · Liang Shang · Hongxu Chen · Jaekyun Moon · Dimitris Papailiopoulos · Kangwook Lee
-
2020 Poster: XtarNet: Learning to Extract Task-Adaptive Representation for Incremental Few-Shot Learning »
Sung Whan Yoon · Do-Yeon Kim · Jun Seo · Jaekyun Moon