When the weighted sum performs well on multi-task learning
Abstract
Multi-task learning (MTL) involves training models on multiple tasks simultaneously, which has been shown to accelerate training and improve generalization. Recently, MTL has been framed as a multi-objective optimization problem. In this context, most works addressing the problem use weighted sum approaches, which are known to fail in certain scenarios, although there are some success stories within MTL. This study investigates the performance of the weighted sum method in MTL using standard multi-task vision datasets. We analyze several performance metrics to highlight the effectiveness of the weighted sum for MTL. Our preliminary findings reveal that the Pareto fronts are highly convex, which may explain the approach's success even when compared to more complex methods.
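To illustrate the technique the abstract refers to, the weighted sum approach scalarizes the vector of per-task losses into a single objective. A minimal sketch, assuming two illustrative task losses and a convex weight combination (the specific values and function name are not from the paper):

```python
# Hedged sketch: weighted-sum scalarization of a multi-task objective.
# The losses and weights below are illustrative assumptions, not values
# taken from the study described in the abstract.

def weighted_sum_loss(task_losses, weights):
    """Combine per-task losses into one scalar objective via a weighted sum."""
    assert len(task_losses) == len(weights), "one weight per task"
    return sum(w * l for w, l in zip(weights, task_losses))

# Example: two tasks (e.g., classification and depth regression)
losses = [0.8, 0.3]
weights = [0.5, 0.5]  # non-negative weights summing to 1 (convex combination)
total = weighted_sum_loss(losses, weights)
print(total)  # 0.55
```

Minimizing this scalar for a fixed weight vector yields one point on the Pareto front; when the front is convex, sweeping the weights can recover the whole front, which is consistent with the abstract's explanation for the method's success.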