Near-Optimal and Efficient First-Order Algorithm for Multi-Task Learning with Shared Linear Representation
Shihong Ding ⋅ Fangyu Du ⋅ Cong Fang
Abstract
Multi-task learning (MTL) has emerged as a pivotal paradigm in machine learning, leveraging shared structure across multiple related tasks. Despite its empirical success, efficiently solvable likelihood-based algorithms remain largely underdeveloped, even for shared linear representations, primarily owing to the non-convexity intrinsic to matrix factorization. This paper introduces a first-order algorithm that jointly learns a shared representation and task-specific parameters with guaranteed computational efficiency. Notably, it converges in $\widetilde{O}(1)$ iterations and attains a \emph{near-optimal} estimation error of $\widetilde{O}(dk/(TN))$, \emph{improving} over existing likelihood-based methods by a factor of $k$, where $d$, $k$, $T$, and $N$ denote the input dimension, representation dimension, number of tasks, and number of samples per task, respectively. Our results demonstrate that likelihood-based first-order methods can solve the MTL problem efficiently.
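For concreteness, here is a minimal sketch of the shared-linear-representation model standard in this line of work; the symbols below ($B^{\star}$, $w_t^{\star}$, $x_{t,i}$, $\epsilon_{t,i}$) are illustrative notation and not necessarily the paper's own. Each task $t = 1, \dots, T$ provides $N$ samples $(x_{t,i}, y_{t,i})$ with $x_{t,i} \in \mathbb{R}^{d}$, generated as
\[
y_{t,i} \;=\; x_{t,i}^{\top} B^{\star} w_t^{\star} + \epsilon_{t,i}, \qquad B^{\star} \in \mathbb{R}^{d \times k}, \quad w_t^{\star} \in \mathbb{R}^{k},
\]
where $B^{\star}$ is the representation shared across all tasks, $w_t^{\star}$ is the task-specific parameter, and $\epsilon_{t,i}$ is observation noise. The bilinear product $B w_t$ is precisely the matrix-factorization structure that renders the joint likelihood non-convex in $(B, w_1, \dots, w_T)$.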