Poster
Which Tasks Should Be Learned Together in Multi-task Learning?
Trevor Standley · Amir Zamir · Dawn Chen · Leonidas Guibas · Jitendra Malik · Silvio Savarese

Tue Jul 14 11:00 AM -- 11:45 AM & Tue Jul 14 10:00 PM -- 10:45 PM (PDT)

Many computer vision applications require solving multiple tasks in real-time. A neural network can be trained to solve multiple tasks simultaneously using multi-task learning. This can save computation at inference time, as only a single network needs to be evaluated. Unfortunately, it often leads to inferior overall performance because task objectives can compete, which raises the question: which tasks should and should not be learned together in one network when employing multi-task learning? We study task cooperation and competition in several different learning settings and propose a framework for assigning tasks to a few neural networks such that cooperating tasks are computed by the same network, while competing tasks are computed by different networks. Our framework offers a time-accuracy trade-off and can achieve higher accuracy at lower inference cost than both a single large multi-task network and a collection of single-task networks.
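The core idea of assigning cooperating tasks to shared networks can be illustrated with a toy sketch. The code below is an illustrative brute-force grouping search, not the paper's actual method: it assumes hypothetical pairwise "affinity" scores between tasks (positive when two tasks help each other, negative when they compete) and exhaustively searches for the partition of tasks into at most `max_networks` groups that maximizes total intra-group affinity. The task names and affinity values are made up for the example.

```python
from itertools import combinations

def partitions(items, max_groups):
    """Yield every way to split items into at most max_groups non-empty groups."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest, max_groups):
        # Place `first` into each existing group in turn...
        for i in range(len(part)):
            yield part[:i] + [part[i] + [first]] + part[i + 1:]
        # ...or open a new group, if the network budget allows it.
        if len(part) < max_groups:
            yield part + [[first]]

def intra_group_score(group, affinity):
    """Sum of pairwise affinities inside one group (one shared network)."""
    return sum(affinity[frozenset(pair)] for pair in combinations(group, 2))

def best_grouping(tasks, affinity, max_networks):
    """Return the task-to-network assignment with the highest total affinity."""
    best, best_score = None, float("-inf")
    for part in partitions(list(tasks), max_networks):
        score = sum(intra_group_score(g, affinity) for g in part)
        if score > best_score:
            best, best_score = part, score
    return best, best_score

# Hypothetical tasks and affinities for illustration only.
tasks = ["depth", "normals", "edges", "semantics"]
affinity = {
    frozenset({"depth", "normals"}): 0.9,
    frozenset({"depth", "edges"}): 0.1,
    frozenset({"depth", "semantics"}): -0.4,
    frozenset({"normals", "edges"}): 0.2,
    frozenset({"normals", "semantics"}): -0.3,
    frozenset({"edges", "semantics"}): 0.5,
}

groups, score = best_grouping(tasks, affinity, max_networks=2)
# Cooperating pairs end up sharing a network; competing pairs are separated.
```

With these toy scores, the search pairs depth with surface normals and edges with semantics (total affinity 1.4), while keeping the strongly competing depth/semantics pair in different networks. Exhaustive search is only feasible for a handful of tasks; the paper's framework addresses the cost of evaluating candidate groupings at scale.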

Author Information

Trevor Standley (Stanford University)
Amir Zamir (Swiss Federal Institute of Technology (EPFL))
Dawn Chen (Google)
Leonidas Guibas (Stanford University)
Jitendra Malik (University of California at Berkeley)
Silvio Savarese (Stanford University)