In Multi-Task Learning (MTL), tasks may compete and limit each other's performance, rather than guiding the optimization toward a solution that is superior to all of its single-task trained counterparts. Since there is often no single solution that is optimal for all tasks, practitioners must balance tradeoffs between tasks' performance and resort to optimality in the Pareto sense. Most MTL methodologies either neglect this aspect entirely and, instead of aiming to learn a Pareto Front, produce a single solution predefined by their optimization scheme, or they produce diverse but discrete solutions. Recent approaches parameterize the Pareto Front via neural networks, leading to complex mappings from tradeoff space to objective space. In this paper, we conjecture that the Pareto Front admits a linear parameterization in parameter space, which leads us to propose Pareto Manifold Learning, an ensembling method in weight space. Our approach produces a continuous Pareto Front in a single training run, allowing the performance on each task to be modulated at inference time. Experiments on multi-task learning benchmarks, ranging from image classification to tabular datasets and scene understanding, show that Pareto Manifold Learning outperforms state-of-the-art single-point algorithms, while learning a better Pareto parameterization than multi-point baselines.
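To illustrate the core idea of a linear parameterization of the Pareto Front in weight space, the sketch below is a minimal, hypothetical two-task example (not the authors' implementation; the class name, layer sizes, losses, and sampling scheme are assumptions for illustration only). Two endpoint weight vectors are trained jointly, a tradeoff value is sampled at every step to weight the task losses, and at inference any point on the segment between the endpoints can be selected to modulate the tradeoff between tasks.

```python
# Hypothetical sketch: a linear weight-space subspace between two endpoints,
# theta(lam) = (1 - lam) * theta_0 + lam * theta_1, trained in a single run.
import torch
import torch.nn as nn

class LinearParetoSubspace(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        # Two parameter sets spanning the (assumed) linear Pareto subspace.
        self.w0 = nn.Parameter(torch.randn(out_dim, in_dim) * 0.01)
        self.w1 = nn.Parameter(torch.randn(out_dim, in_dim) * 0.01)
        self.b0 = nn.Parameter(torch.zeros(out_dim))
        self.b1 = nn.Parameter(torch.zeros(out_dim))

    def forward(self, x, lam):
        # Interpolate the weights for a tradeoff lam in [0, 1], then apply the layer.
        w = (1 - lam) * self.w0 + lam * self.w1
        b = (1 - lam) * self.b0 + lam * self.b1
        return x @ w.T + b

model = LinearParetoSubspace(in_dim=16, out_dim=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fns = [nn.MSELoss(), nn.MSELoss()]  # placeholder per-task losses

for step in range(100):
    x = torch.randn(32, 16)                               # placeholder inputs
    targets = [torch.randn(32, 2), torch.randn(32, 2)]    # placeholder task targets
    lam = torch.rand(()).item()                           # sample a tradeoff per step
    out = model(x, lam)
    # Weight the task losses by the sampled tradeoff so the whole segment is trained.
    loss = (1 - lam) * loss_fns[0](out, targets[0]) + lam * loss_fns[1](out, targets[1])
    opt.zero_grad()
    loss.backward()
    opt.step()

# At inference, any lam in [0, 1] yields a model on the learned weight-space segment,
# modulating the performance tradeoff between the two tasks.
```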
Author Information
Nikolaos Dimitriadis (EPFL)
Pascal Frossard (EPFL)
François Fleuret (University of Geneva)
More from the Same Authors
- 2022: Catastrophic overfitting is a bug but also a feature
  Guillermo Ortiz Jimenez · Pau de Jorge Aranda · Amartya Sanyal · Adel Bibi · Puneet Dokania · Pascal Frossard · Gregory Rogez · Phil Torr
- 2023: Towards Efficient World Models
  Eloi Alonso · Vincent Micheli · François Fleuret
- 2023: 🎤 Fast Causal Attention with Dynamic Sparsity
  Daniele Paliotta · Matteo Pagliardini · Martin Jaggi · François Fleuret
- 2023: DeepEMD: A Transformer-based Fast Estimation of the Earth Mover's Distance
  Atul Kumar Sinha · François Fleuret
- 2020 Poster: Optimizer Benchmarking Needs to Account for Hyperparameter Tuning
  Prabhu Teja Sivaprasad · Florian Mai · Thijs Vogels · Martin Jaggi · François Fleuret
- 2020 Poster: Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention
  Angelos Katharopoulos · Apoorv Vyas · Nikolaos Pappas · François Fleuret
- 2019 Poster: Geometry Aware Convolutional Filters for Omnidirectional Images Representation
  Renata Khasanova · Pascal Frossard
- 2019 Oral: Geometry Aware Convolutional Filters for Omnidirectional Images Representation
  Renata Khasanova · Pascal Frossard
- 2017 Poster: Graph-based Isometry Invariant Representation Learning
  Renata Khasanova · Pascal Frossard
- 2017 Talk: Graph-based Isometry Invariant Representation Learning
  Renata Khasanova · Pascal Frossard