Meta-learning aims to extract meta-knowledge from historical tasks to accelerate learning on new tasks. Typical meta-learning algorithms, such as MAML, learn a single globally shared meta-model for all tasks. However, when the task environment is complex, task model parameters are diverse and a common meta-model is insufficient to capture all the meta-knowledge. To address this challenge, this paper structures task model parameters into multiple subspaces, with each subspace representing one type of meta-knowledge. We propose an algorithm to learn the meta-parameters (i.e., the subspace bases) and theoretically study the generalization properties of the learned subspaces. Experiments on regression and classification meta-learning datasets verify the effectiveness of the proposed algorithm.
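To make the idea in the abstract concrete, below is a minimal, hypothetical NumPy sketch of subspace-structured meta-learning on linear regression tasks: K basis matrices serve as the meta-parameters, each task adapts coefficients inside every subspace, the subspace with the lowest validation loss is selected, and a first-order gradient step updates that subspace's basis. All names (`bases`, `adapt`, `meta_step`) and the ridge-regression inner solver are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, K = 20, 3, 4                  # parameter dim, subspace dim, #subspaces
# Meta-parameters: K basis matrices, each spanning one subspace of R^d.
bases = [rng.standard_normal((d, m)) for _ in range(K)]

def adapt(B, X, y, lam=1e-3):
    """Fit task coefficients w by ridge regression inside subspace B,
    so the task model parameters are theta = B @ w."""
    A = X @ B                        # inputs projected into the subspace
    return np.linalg.solve(A.T @ A + lam * np.eye(B.shape[1]), A.T @ y)

def task_loss(B, w, X, y):
    """Mean squared error of the subspace-constrained task model."""
    return np.mean((X @ (B @ w) - y) ** 2)

def meta_step(bases, tasks, lr=0.01):
    """Adapt each task in every subspace, keep the best-fitting subspace,
    and update only that subspace's basis (first-order update)."""
    for X_tr, y_tr, X_val, y_val in tasks:
        k = min(range(len(bases)),
                key=lambda j: task_loss(bases[j], adapt(bases[j], X_tr, y_tr),
                                        X_val, y_val))
        B = bases[k]
        w = adapt(B, X_tr, y_tr)
        residual = X_val @ (B @ w) - y_val
        # Gradient of the validation MSE w.r.t. B, holding w fixed.
        grad_B = np.outer(2.0 / len(y_val) * (X_val.T @ residual), w)
        bases[k] = B - lr * grad_B

# Toy usage: tasks whose true parameters lie near subspace 0.
tasks = []
for _ in range(8):
    theta = bases[0] @ rng.standard_normal(m)
    X = rng.standard_normal((32, d))
    y = X @ theta + 0.01 * rng.standard_normal(32)
    tasks.append((X[:16], y[:16], X[16:], y[16:]))
meta_step(bases, tasks)
```

The per-task subspace selection is what lets different subspaces specialize to different types of meta-knowledge; with K = 1 the sketch degenerates to learning a single shared subspace for all tasks.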
Author Information
Weisen Jiang (Hong Kong University of Science and Technology)
James Kwok (Hong Kong University of Science and Technology)
Yu Zhang (Hong Kong University of Science and Technology)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Subspace Learning for Effective Meta-Learning
  Thu, Jul 21 through Fri, Jul 22, Hall E #535
More from the Same Authors
- 2023 Poster: Effective Structured Prompting by Meta-Learning and Representative Verbalizer
  Weisen Jiang · Yu Zhang · James Kwok
- 2023 Poster: Non-autoregressive Conditional Diffusion Models for Time Series Prediction
  Lifeng Shen · James Kwok
- 2023 Poster: Nonparametric Iterative Machine Teaching
  Chen Zhang · Xiaofeng Cao · Weiyang Liu · Ivor Tsang · James Kwok
- 2022 Poster: Efficient Variance Reduction for Meta-learning
  Hansi Yang · James Kwok
- 2022 Spotlight: Efficient Variance Reduction for Meta-learning
  Hansi Yang · James Kwok
- 2021 Poster: SparseBERT: Rethinking the Importance Analysis in Self-attention
  Han Shi · Jiahui Gao · Xiaozhe Ren · Hang Xu · Xiaodan Liang · Zhenguo Li · James Kwok
- 2021 Spotlight: SparseBERT: Rethinking the Importance Analysis in Self-attention
  Han Shi · Jiahui Gao · Xiaozhe Ren · Hang Xu · Xiaodan Liang · Zhenguo Li · James Kwok
- 2020 Poster: Searching to Exploit Memorization Effect in Learning with Noisy Labels
  Quanming Yao · Hansi Yang · Bo Han · Gang Niu · James Kwok
- 2019 Poster: Efficient Nonconvex Regularized Tensor Completion with Structure-aware Proximal Iterations
  Quanming Yao · James Kwok · Bo Han
- 2019 Oral: Efficient Nonconvex Regularized Tensor Completion with Structure-aware Proximal Iterations
  Quanming Yao · James Kwok · Bo Han
- 2018 Poster: Online Convolutional Sparse Coding with Sample-Dependent Dictionary
  Yaqing Wang · Quanming Yao · James Kwok · Lionel Ni
- 2018 Poster: Lightweight Stochastic Optimization for Minimizing Finite Sums with Infinite Data
  Shuai Zheng · James Kwok
- 2018 Oral: Lightweight Stochastic Optimization for Minimizing Finite Sums with Infinite Data
  Shuai Zheng · James Kwok
- 2018 Oral: Online Convolutional Sparse Coding with Sample-Dependent Dictionary
  Yaqing Wang · Quanming Yao · James Kwok · Lionel Ni
- 2017 Poster: Follow the Moving Leader in Deep Learning
  Shuai Zheng · James Kwok
- 2017 Talk: Follow the Moving Leader in Deep Learning
  Shuai Zheng · James Kwok