

Poster

Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks

Liam Collins · Hamed Hassani · Mahdi Soltanolkotabi · Aryan Mokhtari · Sanjay Shakkottai

Hall C 4-9 #2217
[ Paper PDF ]
Tue 23 Jul 4:30 a.m. PDT — 6 a.m. PDT
 
Oral presentation: Oral 2A Representation Learning 1
Tue 23 Jul 7:30 a.m. PDT — 8:30 a.m. PDT

Abstract: An increasingly popular machine learning paradigm is to pretrain a neural network (NN) on many tasks offline, then adapt it to downstream tasks, often by re-training only the last linear layer of the network. This approach yields strong downstream performance in a variety of contexts, demonstrating that multitask pretraining leads to effective feature learning. Although several recent theoretical studies have shown that shallow NNs learn meaningful features when either (i) they are trained on a *single* task or (ii) they are *linear*, very little is known about the closer-to-practice case of *nonlinear* NNs trained on *multiple* tasks. In this work, we present the first results proving that feature learning occurs during training with a nonlinear model on multiple tasks. Our key insight is that multi-task pretraining induces a pseudo-contrastive loss that favors representations that align points that typically have the same label across tasks. Using this observation, we show that when the tasks are binary classification tasks with labels depending on the projection of the data onto an $r$-dimensional subspace within the $d\gg r$-dimensional input space, a simple gradient-based multitask learning algorithm on a two-layer ReLU NN recovers this projection, allowing for generalization to downstream tasks with sample and neuron complexity independent of $d$. In contrast, we show that with high probability over the draw of a single task, training on this single task cannot guarantee learning all $r$ ground-truth features.
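The setting in the abstract can be illustrated with a small synthetic experiment. The following is a minimal, hypothetical sketch (not the authors' code; the logistic loss, SGD optimizer, hidden width, learning rates, and dimensions are illustrative assumptions): binary tasks whose labels depend only on the projection of the data onto a shared $r$-dimensional subspace of $\mathbb{R}^d$, a two-layer ReLU network pretrained on many such tasks with shared first-layer weights and task-specific linear heads, and downstream adaptation by retraining only a new last linear layer on top of the frozen features.

```python
# Hypothetical sketch of the multi-task setup described in the abstract.
# Tasks: binary labels determined by the projection of x onto an r-dim subspace U.
# Model: two-layer ReLU net with shared first layer W and per-task linear heads.
import torch

torch.manual_seed(0)
d, r, T, n = 50, 3, 20, 200                   # input dim, subspace dim, num tasks, samples per task
U, _ = torch.linalg.qr(torch.randn(d, r))     # ground-truth subspace (orthonormal columns)

def sample_task(n):
    """One binary task: label = sign of a random linear function of the projection U^T x."""
    w = torch.randn(r)
    X = torch.randn(n, d)
    y = torch.sign(X @ U @ w)
    return X, y

tasks = [sample_task(n) for _ in range(T)]

# Shared first-layer weights W and one linear head per task (illustrative width and scales).
m = 64
W = (torch.randn(m, d) / d**0.5).requires_grad_()
heads = [(torch.randn(m) / m**0.5).requires_grad_() for _ in range(T)]

opt = torch.optim.SGD([W] + heads, lr=0.05)
for step in range(500):
    opt.zero_grad()
    loss = 0.0
    for (X, y), a in zip(tasks, heads):
        logits = torch.relu(X @ W.T) @ a
        loss = loss + torch.nn.functional.soft_margin_loss(logits, y)  # logistic loss on +/-1 labels
    (loss / T).backward()
    opt.step()

# Downstream adaptation: freeze the learned features, retrain only a new last linear layer.
X_new, y_new = sample_task(n)
feat = torch.relu(X_new @ W.T).detach()
a_new = torch.zeros(m, requires_grad=True)
head_opt = torch.optim.SGD([a_new], lr=0.1)
for step in range(300):
    head_opt.zero_grad()
    torch.nn.functional.soft_margin_loss(feat @ a_new, y_new).backward()
    head_opt.step()

acc = (torch.sign(feat @ a_new) == y_new).float().mean().item()
print(f"downstream accuracy with frozen pretrained features: {acc:.2f}")
```

In the paper's regime, the pretrained first layer is shown to recover the ground-truth projection, so a linear head on the frozen features suffices for new tasks drawn from the same subspace; the width and sample counts above are arbitrary choices for illustration, not the complexities established in the theory.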
