Sharp-MAML: Sharpness-Aware Model-Agnostic Meta Learning
Momin Abbas · Quan Xiao · Lisha Chen · Pin-Yu Chen · Tianyi Chen

Thu Jul 21 11:30 AM -- 11:35 AM (PDT) @ Room 309

Model-agnostic meta learning (MAML) is currently one of the dominant approaches to few-shot meta-learning. Despite its effectiveness, training MAML can be challenging due to its innate bilevel problem structure. Specifically, the loss landscape of MAML is much more complex, with possibly many more saddle points and local minima than its empirical risk minimization counterpart. To address this challenge, we leverage the recently proposed sharpness-aware minimization and develop a sharpness-aware MAML approach that we term Sharp-MAML. We empirically demonstrate that Sharp-MAML and its computationally efficient variant can outperform popular existing MAML baselines (e.g., +12% accuracy on Mini-Imagenet). We complement the empirical study with a convergence analysis and a generalization bound for Sharp-MAML. To the best of our knowledge, this is the first empirical and theoretical study of sharpness-aware minimization in the context of bilevel optimization.
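The combination described above can be sketched as a first-order toy example: an inner-loop MAML adaptation step followed by a sharpness-aware (SAM-style) perturbation of the adapted parameters before computing the meta-gradient. Everything below (the quadratic toy loss, the hyperparameters `alpha`, `beta`, `rho`, and the first-order approximation of the meta-gradient) is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def loss(w, task):
    # Toy per-task quadratic loss: ||w - task_optimum||^2.
    return np.sum((w - task) ** 2)

def grad(w, task):
    # Gradient of the toy loss above.
    return 2.0 * (w - task)

def sharp_maml_step(w, tasks, alpha=0.1, beta=0.05, rho=0.01):
    """One sharpness-aware MAML outer step (illustrative sketch).

    For each task: adapt w with one inner gradient step (MAML),
    then evaluate the meta-gradient at a SAM-perturbed point
    w_adapted + rho * g / ||g|| rather than at w_adapted itself.
    The meta-gradient is taken first-order (FOMAML-style), i.e.
    without differentiating through the inner update.
    """
    meta_grad = np.zeros_like(w)
    for task in tasks:
        # Inner loop: one-step task adaptation (standard MAML inner update).
        w_adapted = w - alpha * grad(w, task)
        # SAM: ascend to the worst-case point in an L2 ball of radius rho.
        g = grad(w_adapted, task)
        eps = rho * g / (np.linalg.norm(g) + 1e-12)
        # Outer loop: accumulate the gradient at the perturbed point.
        meta_grad += grad(w_adapted + eps, task)
    return w - beta * meta_grad / len(tasks)

# Two toy "tasks" whose optima differ; the meta-parameters should
# converge toward a point that adapts well to both.
w = np.zeros(2)
tasks = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
for _ in range(200):
    w = sharp_maml_step(w, tasks)
```

On this symmetric toy problem the iterates converge to the average task optimum `[0.5, 0.5]`; in the paper's setting the same perturb-then-descend structure is applied to neural network parameters within the bilevel MAML objective.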

Author Information

Momin Abbas (Rensselaer Polytechnic Institute)
Quan Xiao (Rensselaer Polytechnic Institute)
Lisha Chen (Rensselaer Polytechnic Institute)
Pin-Yu Chen (IBM Research AI)
Tianyi Chen (Rensselaer Polytechnic Institute)
