Spotlight
Large-Scale Meta-Learning with Continual Trajectory Shifting
JaeWoong Shin · Hae Beom Lee · Boqing Gong · Sung Ju Hwang

Thu Jul 22 05:35 PM -- 05:40 PM (PDT)

Meta-learning of shared initialization parameters has been shown to be highly effective for few-shot learning tasks. However, extending the framework to many-shot scenarios, which could further enhance its practicality, has been relatively overlooked due to the technical difficulty of meta-learning over long chains of inner-gradient steps. In this paper, we first show that allowing the meta-learner to take a larger number of inner gradient steps better captures the structure of heterogeneous and large-scale task distributions, and thus yields better initialization points. Further, to increase the frequency of meta-updates even with excessively long inner-optimization trajectories, we propose to estimate the required shift of the task-specific parameters with respect to the change of the initialization parameters. This lets us increase the frequency of meta-updates arbitrarily, greatly improving both the meta-level convergence and the quality of the learned initializations. We validate our method on a heterogeneous set of large-scale tasks and show that it largely outperforms previous first-order meta-learning methods, as well as multi-task learning and fine-tuning baselines, in terms of both generalization performance and convergence.
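
The sketch below illustrates the core idea stated in the abstract, not the paper's exact algorithm: inner trajectories run continually, and whenever the shared initialization is updated, the in-flight task-specific parameters are shifted by the same displacement instead of being restarted from the new initialization. The meta-update shown is a Reptile-style first-order step, the quadratic per-task losses are stand-ins, and all names (meta_params, inner_params, inner_lr, meta_lr) are illustrative assumptions.

    import numpy as np

    # Hypothetical quadratic losses standing in for per-task objectives:
    # task t has optimum a_t and loss L_t(theta) = 0.5 * ||theta - a_t||^2.
    rng = np.random.default_rng(0)
    dim, num_tasks = 5, 4
    task_optima = rng.normal(size=(num_tasks, dim))

    def grad(theta, t):
        """Gradient of the illustrative quadratic loss for task t."""
        return theta - task_optima[t]

    meta_params = np.zeros(dim)                          # shared initialization
    inner_params = np.tile(meta_params, (num_tasks, 1))  # one long trajectory per task
    inner_lr, meta_lr = 0.1, 0.05

    for step in range(200):
        # One inner-gradient step per task along its (long) trajectory.
        for t in range(num_tasks):
            inner_params[t] -= inner_lr * grad(inner_params[t], t)

        # First-order (Reptile-style) meta-update toward the task parameters;
        # it can fire at every inner step rather than only at trajectory ends.
        delta = meta_lr * (inner_params.mean(axis=0) - meta_params)
        meta_params += delta

        # Trajectory shifting: rather than restarting each inner trajectory
        # from the updated initialization, shift the task-specific parameters
        # by the same displacement applied to the initialization.
        inner_params += delta

Because the inner trajectories are shifted rather than restarted, the meta-update frequency is decoupled from the trajectory length, which is what enables frequent meta-updates under many-shot inner optimization.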

Author Information

JaeWoong Shin (KAIST)
Hae Beom Lee (KAIST)
Boqing Gong (Google)
Sung Ju Hwang (KAIST, AITRICS)
