Poster

A Multi-objective / Multi-task Learning Framework Induced by Pareto Stationarity

Michinari Momma · Chaosheng Dong · Jia Liu

Hall E #520

Keywords: [ OPT: Multi-objective Optimization ] [ APP: Everything Else ] [ MISC: Transfer, Multitask and Meta-learning ]


Abstract:

Multi-objective optimization (MOO) and multi-task learning (MTL) have gained much popularity, with prevalent use cases such as production development of regression / classification / ranking models with MOO, and training of deep learning models with MTL. Despite the long history of research in MOO, its application to machine learning requires tailored solution strategies; algorithms have recently been developed for specific problems such as discovering an arbitrary Pareto optimal (PO) solution, or one satisfying a particular form of preference. In this paper, we develop a novel and generic framework for discovering a PO solution under multiple forms of preferences. It allows us to formulate a generic MOO / MTL problem that expresses a preference, which is solved so as to achieve both alignment with the preference and Pareto optimality at the same time. Specifically, we apply the framework to solve the weighted Chebyshev problem and an extension of it. The former is a well-known method for discovering the Pareto front; the latter helps find a model that outperforms an existing model in a single run. Experimental results demonstrate not only that the method achieves performance competitive with existing methods, but also that it can attain that performance under different forms of preference.
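For context, the weighted Chebyshev problem referenced in the abstract is commonly formulated as follows; the notation here is ours and may differ from the paper's: f_1, ..., f_m denote the objective functions, w_1, ..., w_m > 0 are preference weights, and z* is an ideal (reference) point in objective space.

\min_{x} \; \max_{i \in \{1, \dots, m\}} \; w_i \left( f_i(x) - z_i^* \right)

For any choice of positive weights, a minimizer of this scalarization is weakly Pareto optimal, and sweeping w over the simplex is a standard way to trace out the Pareto front, which is why the abstract describes it as a method for Pareto front discovery.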