Problem Distributions as Tasks: Repurposing Meta Learning for Generative Combinatorial Optimization towards Multi-task Pretrain and Adaptation
Wenzheng Pan ⋅ Jiale Ma ⋅ Nuoyan Chen ⋅ Yang Li ⋅ Junchi Yan
Abstract
Despite the fast progress of Neural Combinatorial Optimization (NCO) on graphs, existing solvers mainly learn one narrow task at a time (e.g., uniform TSP) and struggle with instances drawn from diverse distributions. This paper proposes M$^2$GenCO, a Multi-task learning framework that pioneers the instantiation of the Meta-learning mechanism with diffusion-based Generative solving for CO Problems (COPs) on graphs, being the first to formulate "tasks" in meta-learning as distinct problem types rather than instances of the same problem. With a tailored lightweight graph neural network, our framework performs effective joint pre-training across a variety of problem types and efficient fine-tuning to adapt to out-of-distribution scenarios. Further, we establish a benchmark comprising 5 classic graph COPs with varying scales and multiple distributions, forming 38 distinct test datasets that facilitate standardized evaluation of the generalizability and adaptability of NCO solvers. Empirically, M$^2$GenCO with a greedy decoder yields an overall 9.16% performance gain with an average 95.6$\times$ inference speedup, and achieves state-of-the-art results on all test sets with simple local search, while maintaining faster solving times than previous neural methods. Computational resources and training time are reduced by up to 82% and 91%, respectively.
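To make the core idea concrete, below is a minimal, purely illustrative sketch of meta-pretraining where each "task" is a problem type rather than an instance distribution. It uses a generic Reptile-style outer loop as a stand-in for the paper's (unspecified here) meta-learning procedure, and toy quadratic losses in place of the diffusion training loss of the graph neural network; the task names and all functions are hypothetical.

```python
import random

# Hypothetical stand-ins: per-problem-type "optimal" parameters.
# In the real framework these would be full training objectives
# for TSP, MIS, CVRP, etc., not 2-D quadratics.
TASK_OPTIMA = {
    "TSP": [1.0, 0.0],
    "MIS": [0.0, 1.0],
    "CVRP": [1.0, 1.0],
}

def grad(w, c):
    # Gradient of the toy task loss ||w - c||^2.
    return [2 * (wi - ci) for wi, ci in zip(w, c)]

def inner_adapt(w, c, steps=10, lr=0.1):
    # A few SGD steps on one task: stands in for training
    # (or fine-tuning) on a single problem type.
    w = list(w)
    for _ in range(steps):
        g = grad(w, c)
        w = [wi - lr * gi for wi, gi in zip(w, g)]
    return w

def meta_pretrain(meta_iters=1000, meta_lr=0.1, seed=0):
    rng = random.Random(seed)
    w = [0.0, 0.0]                      # shared (meta) parameters
    tasks = list(TASK_OPTIMA.values())
    for _ in range(meta_iters):
        c = rng.choice(tasks)           # sample a problem type as the task
        w_adapted = inner_adapt(w, c)
        # Reptile-style update: move meta-params toward task-adapted params.
        w = [wi + meta_lr * (ai - wi) for wi, ai in zip(w, w_adapted)]
    return w

w = meta_pretrain()
# The meta-parameters settle near a compromise across all problem
# types, giving a good initialization for fast adaptation to a new,
# out-of-distribution problem type via inner_adapt alone.
```

The point of the sketch is the role reversal: the outer loop samples problem *types*, so the pretrained parameters encode structure shared across COPs, and only the cheap inner loop runs at adaptation time.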