A General Neural Backbone for Mixed-Integer Linear Optimization via Dual Attention
Abstract
Mixed-integer linear programming (MILP) is a foundational framework for combinatorial optimization across science and engineering, yet it remains hard to solve at scale due to its NP-hardness. Recent learning-based methods typically model MILP instances as variable–constraint bipartite graphs and apply Graph Neural Networks (GNNs) for representation learning, but the inherent locality of message passing limits their representational power. We propose an attention-driven neural backbone that adopts an element-centric view of variables and constraints, in which a dual-attention mechanism performs intra-type self-attention and inter-type cross-attention in parallel. Across three representative tasks at the instance, element, and solving-state levels, our model consistently outperforms conventional GNN-based architectures, highlighting attention-based, element-centric modeling as a powerful foundation for learning-enhanced combinatorial optimization.
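To make the dual-attention idea concrete, the following is a minimal sketch of one such layer, assuming PyTorch and an element-centric encoding in which each variable and each constraint is a d-dimensional token. All module and tensor names here are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a dual-attention layer: parallel intra-type
# self-attention and inter-type cross-attention over variable tokens V
# and constraint tokens C. Not the authors' released code.
import torch
import torch.nn as nn


class DualAttentionLayer(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.self_var = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.self_con = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_var = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_con = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm_var = nn.LayerNorm(d_model)
        self.norm_con = nn.LayerNorm(d_model)

    def forward(self, v: torch.Tensor, c: torch.Tensor):
        # v: (batch, n_vars, d) variable embeddings
        # c: (batch, n_cons, d) constraint embeddings
        # Intra-type: each element type attends over itself, giving a global
        # receptive field, unlike one round of bipartite message passing.
        v_self, _ = self.self_var(v, v, v)
        c_self, _ = self.self_con(c, c, c)
        # Inter-type: variables attend to constraints and vice versa,
        # computed in parallel from the same input embeddings.
        v_cross, _ = self.cross_var(v, c, c)
        c_cross, _ = self.cross_con(c, v, v)
        # Combine both views with residual connections; the real model
        # presumably interleaves feed-forward blocks as well (assumption).
        v_out = self.norm_var(v + v_self + v_cross)
        c_out = self.norm_con(c + c_self + c_cross)
        return v_out, c_out


if __name__ == "__main__":
    layer = DualAttentionLayer(d_model=64, n_heads=4)
    v = torch.randn(2, 10, 64)  # 10 variable tokens
    c = torch.randn(2, 6, 64)   # 6 constraint tokens
    v_out, c_out = layer(v, c)
    print(v_out.shape, c_out.shape)  # (2, 10, 64) (2, 6, 64)
```

The key contrast with a bipartite GNN is visible in the self-attention calls: every variable can attend to every other variable (and likewise for constraints) within a single layer, rather than exchanging information only through shared constraints over multiple message-passing rounds.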