Poster
Provable Meta-Learning of Linear Representations
Nilesh Tripuraneni · Chi Jin · Michael Jordan

Wed Jul 21 09:00 PM -- 11:00 PM (PDT)

Meta-learning, or learning-to-learn, seeks to design algorithms that can utilize previous experience to rapidly learn new skills or adapt to new environments. Representation learning---a key tool for performing meta-learning---learns a data representation that can transfer knowledge across multiple tasks, which is essential in regimes where data is scarce. Despite a recent surge of interest in the practice of meta-learning, the theoretical underpinnings of meta-learning algorithms are lacking, especially in the context of learning transferable representations. In this paper, we focus on the problem of multi-task linear regression---in which multiple linear regression models share a common, low-dimensional linear representation. Here, we provide provably fast, sample-efficient algorithms to address the dual challenges of (1) learning a common set of features from multiple, related tasks, and (2) transferring this knowledge to new, unseen tasks. Both are central to the general problem of meta-learning. Finally, we complement these results by providing information-theoretic lower bounds on the sample complexity of learning these linear features.
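
To make the setup concrete, here is a minimal illustrative sketch (not the authors' exact procedure) of the multi-task linear model the abstract describes: each task t observes y = x^T B w_t + noise, where B (d x r, with r << d) is the shared low-dimensional representation and w_t is a task-specific head. The sketch recovers the column space of B with a simple method-of-moments estimator (top-r eigenvectors of the averaged y^2 x x^T matrix) and then fits a new task in the learned r-dimensional feature space; all dimensions, the noise level, and the estimator choice are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, T, n = 50, 3, 40, 25            # ambient dim, latent dim, tasks, samples per task

B, _ = np.linalg.qr(rng.standard_normal((d, r)))   # ground-truth shared subspace (d x r)
W = rng.standard_normal((T, r))                    # task-specific heads w_t

# Synthetic multi-task data: y_{t,i} = x_{t,i}^T B w_t + noise.
X = rng.standard_normal((T, n, d))
Y = np.einsum('tnd,dr,tr->tn', X, B, W) + 0.1 * rng.standard_normal((T, n))

# Method-of-moments sketch: average y^2 * x x^T over all tasks and samples;
# for Gaussian covariates its top-r eigenvectors align with col(B).
M = np.einsum('tn,tnd,tne->de', Y**2, X, X) / (T * n)
_, eigvecs = np.linalg.eigh(M)                     # eigenvalues in ascending order
B_hat = eigvecs[:, -r:]                            # estimated feature subspace

# Subspace (sin-theta) distance: small when B_hat spans col(B).
P = B @ B.T
err = np.linalg.norm((np.eye(d) - P) @ B_hat, 2)
print(f"subspace error: {err:.3f}")

# Transfer to a new, unseen task: regress in the learned r-dim feature space,
# so only on the order of r samples are needed rather than d.
w_new = rng.standard_normal(r)
X_new = rng.standard_normal((2 * r, d))
y_new = X_new @ B @ w_new + 0.1 * rng.standard_normal(2 * r)
w_hat, *_ = np.linalg.lstsq(X_new @ B_hat, y_new, rcond=None)
print("new-task head estimate:", w_hat)
```

The final least-squares fit illustrates the transfer step: once the shared features are learned, a new task requires only an r-dimensional regression, which is the sample-complexity gain the abstract refers to.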

Author Information

Nilesh Tripuraneni (UC Berkeley)
Chi Jin (Princeton University)
Michael Jordan (UC Berkeley)
