Developing algorithms that can generalize to a novel task from only a few labeled examples is a fundamental challenge in closing the gap between machine- and human-level performance. Human cognition relies on structured, reusable concepts that let us rapidly adapt to new tasks and explain our decisions. Existing meta-learning methods, however, learn complex representations across prior labeled tasks without imposing any structure on those representations. In this talk I will discuss how meta-learning methods can improve generalization by learning to learn along human-interpretable concept dimensions. Instead of learning a joint, unstructured metric space, we learn mappings of high-level concepts into semi-structured metric spaces and effectively combine the outputs of independent concept learners. Experiments on diverse domains, including a benchmark image classification dataset and a novel single-cell dataset from biology, show significant gains over strong meta-learning baselines.
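The idea of combining independent concept learners can be illustrated with a minimal sketch. This is not the talk's exact method: it assumes prototypical-style nearest-prototype scoring within each concept subspace and uses hand-fixed binary feature masks as a hypothetical stand-in for learned concept mappings; the final class score simply sums the per-concept scores.

```python
import numpy as np

def concept_scores(support, support_labels, query, concept_masks, n_classes):
    """Score a query against class prototypes within each concept subspace.

    Each concept learner sees only the feature dimensions selected by its
    mask (an illustrative stand-in for a learned concept mapping), computes
    class prototypes from the labeled support set, and scores the query by
    negative distance to each prototype. Scores are summed across concepts.
    """
    total = np.zeros(n_classes)
    for mask in concept_masks:
        s = support * mask  # project support set into this concept's subspace
        q = query * mask    # project the query the same way
        for c in range(n_classes):
            proto = s[support_labels == c].mean(axis=0)  # class prototype
            total[c] -= np.linalg.norm(q - proto)        # nearer = higher score
    return total

# Toy 2-way task: 4-dim features, one concept per pair of dimensions.
support = np.array([[1, 0, 1, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 1, 0, 1]], dtype=float)
labels = np.array([0, 0, 1, 1])
masks = [np.array([1, 1, 0, 0]), np.array([0, 0, 1, 1])]
query = np.array([1, 0, 1, 0], dtype=float)

scores = concept_scores(support, labels, query, masks, n_classes=2)
print(int(np.argmax(scores)))  # → 0
```

Because each concept learner operates in its own subspace, a prediction can be traced back to the concepts that drove it, which is the source of the interpretability claimed above.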