LoRe: Adaptive Interaction-Evaluation Routing with Per-step Interaction Budgets for Iterative Graph Solvers
Jintao Li ⋅ Yong-Yi Wang ⋅ Zheng-An Wang ⋅ Heng Fan
Abstract
Diffusion-based neural solvers for combinatorial optimization repeatedly re-evaluate dense edge/factor interactions, making inference expensive in wall-clock time and often memory-bound at scale. We introduce LoRe, a training-free, inference-time drop-in wrapper that enforces per-step interaction-evaluation budgeting: at each iteration it evaluates only a fixed fraction of interactions, dynamically routing computation to those with high conflict or high uncertainty rather than relying on fixed sparsification (e.g., static kNN graphs or static masks). Under fully inclusive end-to-end wall-clock accounting, LoRe substantially improves scalability on maximum independent set (MIS), pushing feasible inference $2.5\times$ beyond the baseline's out-of-memory boundary while delivering a $4$--$5\times$ speedup, about 81% peak-memory reduction, and 86.9% MIS-size retention. In an auxiliary study on the large-scale traveling salesman problem (TSP), LoRe achieves a $7.5\times$ median speedup at $n=1000$ with a 97% median memory reduction and a median gap difference of $-0.22$ percentage points versus the baseline, supporting its generality across problem families.
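To make the budgeting step concrete, the following is a minimal PyTorch sketch of the kind of per-step top-k routing the abstract describes: score every interaction, then re-evaluate only the top fraction within the budget. All names here (`route_interactions`, the uncertainty proxy, the 10% budget) are illustrative assumptions, not LoRe's actual API or scoring rule.

```python
import torch

def route_interactions(scores: torch.Tensor, budget_frac: float) -> torch.Tensor:
    """Select the top `budget_frac` fraction of interactions by score.

    `scores` holds one priority value per interaction (e.g., a conflict
    or uncertainty estimate); only the returned indices are re-evaluated
    this step. Names and scoring are illustrative, not LoRe's API.
    """
    k = max(1, int(budget_frac * scores.numel()))
    _, idx = torch.topk(scores, k)
    return idx

# Toy usage: n cached interactions, re-evaluate only 10% per step.
n = 10_000
messages = torch.zeros(n)  # cached interaction values between steps
for step in range(5):
    # Hypothetical priority signal standing in for a real conflict or
    # uncertainty estimate from the solver's current state.
    scores = torch.rand(n)
    idx = route_interactions(scores, budget_frac=0.10)
    # Stand-in for the expensive dense evaluation, applied sparsely.
    messages[idx] = torch.randn(idx.numel())
```

Because the selection is recomputed every iteration from the current scores, the evaluated set adapts as the solve progresses, which is what distinguishes this scheme from a static kNN graph or a fixed mask chosen once up front.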