Unsupervised Diffusion for Combinatorial Optimization via Adjoint Matching
Abstract
Neural solvers have recently emerged as powerful tools for combinatorial optimization (CO). Among them, diffusion models show particular promise because their iterative generative processes can capture the highly multimodal solution distributions characteristic of CO problems. However, training diffusion models typically requires large collections of near-optimal solutions, which limits scalability and generalization. We address this fundamental challenge by extending adjoint matching, an unsupervised diffusion training framework based on chain-rule–style gradient propagation in continuous spaces, to discrete combinatorial domains. Our approach resolves the broken-gradient issue inherent to discrete data and unifies local and global training objectives within a single principled framework. Empirically, our method consistently outperforms existing unsupervised baselines and achieves performance comparable to supervised diffusion models.