

Poster

Soup-of-Experts: Pretraining Specialist Models via Parameters Averaging

Pierre Ablin · Angelos Katharopoulos · Skyler Seto · David Grangier

East Exhibition Hall A-B #E-1702
Tue 15 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract:

Machine learning models are routinely trained on a mixture of different data domains, and different domain weights yield very different downstream performance. We propose the Soup-of-Experts, a novel architecture that can instantiate a model at test time for any domain weights with minimal computational cost and without re-training the model. Our architecture consists of a bank of expert parameters, which are linearly combined to instantiate one model. We learn the linear combination coefficients as a function of the input domain weights. To train this architecture, we sample random domain weights, instantiate the corresponding model, and backpropagate through one batch of data sampled with these domain weights. We demonstrate how our approach quickly obtains small specialized models on several language modeling tasks. Soup-of-Experts models are particularly appealing when one needs to ship many different specialist models quickly under a size constraint.
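The abstract describes three moving parts: a bank of expert parameters, a learned map from domain weights to mixing coefficients, and a training loop that samples random domain weights, instantiates the mixed model, and backpropagates through one batch drawn with those weights. The sketch below shows one way this could look in PyTorch; it is an illustrative assumption, not the authors' implementation. In particular, the toy next-token model, the SoupOfExperts and make_base_model names, the Dirichlet sampling of domain weights, and all sizes are made up for the example.

```python
# Hypothetical sketch of the Soup-of-Experts idea; all names and sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, D_MODEL, N_DOMAINS, N_EXPERTS = 100, 64, 4, 8

def make_base_model() -> nn.Module:
    # Stand-in for the shared model architecture (a toy next-token predictor).
    return nn.Sequential(nn.Embedding(VOCAB, D_MODEL), nn.Linear(D_MODEL, VOCAB))

class SoupOfExperts(nn.Module):
    def __init__(self, n_experts: int, n_domains: int):
        super().__init__()
        template = make_base_model()
        self.shapes = [(name, p.shape) for name, p in template.named_parameters()]
        # Bank of expert parameters: one flat copy of all model weights per expert.
        flat = torch.cat([p.detach().flatten() for p in template.parameters()])
        init = flat.repeat(n_experts, 1) + 0.01 * torch.randn(n_experts, flat.numel())
        self.experts = nn.Parameter(init)                    # (n_experts, n_params)
        # Small network mapping domain weights to expert mixing coefficients.
        self.router = nn.Sequential(nn.Linear(n_domains, 64), nn.ReLU(),
                                    nn.Linear(64, n_experts))

    def instantiate(self, domain_weights: torch.Tensor) -> dict:
        # Linearly combine the expert bank into one set of model parameters.
        coeffs = torch.softmax(self.router(domain_weights), dim=-1)   # (n_experts,)
        flat_params = coeffs @ self.experts                           # (n_params,)
        params, offset = {}, 0
        for name, shape in self.shapes:
            n = shape.numel()
            params[name] = flat_params[offset:offset + n].view(shape)
            offset += n
        return params

# Training: sample random domain weights, instantiate the model, backprop on one batch.
soup = SoupOfExperts(N_EXPERTS, N_DOMAINS)
template = make_base_model()   # architecture shell; its weights are overridden below
opt = torch.optim.Adam(soup.parameters(), lr=1e-3)
for step in range(100):
    w = torch.distributions.Dirichlet(torch.ones(N_DOMAINS)).sample()
    params = soup.instantiate(w)
    # Placeholder batch; in practice it would be sampled according to the weights w.
    tokens = torch.randint(0, VOCAB, (32, 16))
    logits = torch.func.functional_call(template, params, (tokens,))
    loss = F.cross_entropy(logits[:, :-1].reshape(-1, VOCAB), tokens[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

At test time, calling instantiate with the target domain mixture would yield a single specialist model of the base size, which is the source of the minimal per-specialist cost claimed in the abstract.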

Lay Summary:

We propose a new neural network architecture that holds many parameters trained jointly. Unlike a standard architecture, when we want to use the model on a new task, we first select a relevant small subset of the model's parameters and then use only those parameters to address the task. Since each task requires only a small number of parameters, the resulting models are very efficient.
