

Poster

Which Tasks Should Be Learned Together in Multi-task Learning?

Trevor Standley · Amir Zamir · Dawn Chen · Leonidas Guibas · Jitendra Malik · Silvio Savarese

Keywords: [ Computer Vision ] [ Supervised Learning ] [ Transfer and Multitask Learning ] [ Transfer, Multitask and Meta-learning ]


Abstract:

Many computer vision applications require solving multiple tasks in real-time. A neural network can be trained to solve multiple tasks simultaneously using multi-task learning. This can save computation at inference time, as only a single network needs to be evaluated. Unfortunately, it often leads to inferior overall performance because task objectives can compete, which poses the question: which tasks should and should not be learned together in one network when employing multi-task learning? We study task cooperation and competition in several different learning settings and propose a framework for assigning tasks to a few neural networks such that cooperating tasks are computed by the same network, while competing tasks are computed by different networks. Our framework offers a time-accuracy trade-off and can produce better accuracy with less inference time than both a single large multi-task network and many single-task networks.
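To make the task-assignment idea concrete, below is a minimal sketch of one way such a framework could select task groupings: given an estimated per-task score for each candidate multi-task network (each subset of tasks trained together) and a per-network inference cost, brute-force search for a set of networks that covers all tasks within a time budget while maximizing the summed score. The task names, equal per-network cost, synthetic scores, and the exhaustive search are all illustrative assumptions, not the authors' exact algorithm.

```python
"""Sketch of budget-constrained task grouping (assumptions noted above)."""
from itertools import chain, combinations

# Hypothetical task set used only for illustration.
TASKS = ["depth", "normals", "edges", "keypoints", "segmentation"]


def all_groupings(tasks):
    """Yield every non-empty subset of tasks as a candidate multi-task network."""
    for r in range(1, len(tasks) + 1):
        yield from combinations(tasks, r)


def best_assignment(task_scores, network_cost, budget):
    """Brute-force search over sets of candidate networks.

    task_scores: dict mapping a task subset (tuple) -> {task: estimated score}
    network_cost: inference cost of one network (assumed equal for all networks)
    budget: total inference-time budget
    Returns the set of networks covering all tasks with the highest total score,
    where each task takes its best score among the chosen networks containing it.
    """
    candidates = list(task_scores)
    best, best_score = None, float("-inf")
    max_nets = int(budget // network_cost)
    for k in range(1, max_nets + 1):
        for nets in combinations(candidates, k):
            covered = set(chain.from_iterable(nets))
            if covered != set(TASKS):
                continue  # every task must be solved by some network
            score = sum(
                max(task_scores[n][t] for n in nets if t in n) for t in TASKS
            )
            if score > best_score:
                best, best_score = nets, score
    return best, best_score


if __name__ == "__main__":
    # Synthetic scores standing in for the validation performance the framework
    # would estimate for each candidate network (larger groups penalized slightly).
    scores = {
        g: {t: 1.0 - 0.02 * (len(g) - 1) for t in g} for g in all_groupings(TASKS)
    }
    nets, total = best_assignment(scores, network_cost=1.0, budget=2.0)
    print("chosen networks:", nets, "total score:", round(total, 3))
```

With a budget of two networks, the sketch trades off sharing (fewer, larger networks fit the budget) against interference (the synthetic per-task penalty for larger groups), which is the time-accuracy trade-off the abstract describes.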
