
Learning What to Defer for Maximum Independent Sets
Sungsoo Ahn · Younggyo Seo · Jinwoo Shin

Wed Jul 15 05:00 AM -- 05:45 AM & Wed Jul 15 07:00 PM -- 07:45 PM (PDT)

Designing efficient algorithms for combinatorial optimization is a ubiquitous challenge across scientific fields. Recently, deep reinforcement learning (DRL) frameworks have gained considerable attention as a new approach: they can automate the design of a solver while relying less on sophisticated domain knowledge of the target problem. However, existing DRL solvers determine the solution using a number of stages proportional to the number of elements in the solution, which severely limits their applicability to large-scale graphs. In this paper, we seek to resolve this issue by proposing a novel DRL scheme, coined learning what to defer (LwD), where the agent adaptively shrinks or stretches the number of stages by learning to distribute the element-wise decisions of the solution across stages. We apply the proposed framework to the maximum independent set (MIS) problem and demonstrate its significant improvement over the current state-of-the-art DRL scheme. We also show that LwD can outperform conventional MIS solvers on large-scale graphs with millions of vertices, under a limited time budget.
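To make the deferral idea concrete, here is a minimal sketch of a defer-based MIS construction loop. At each stage, every undecided vertex picks include or defer; conflicts between adjacent inclusions are resolved, and neighbors of included vertices are excluded. In LwD a learned policy makes the per-vertex choice; the random stand-in policy below, along with the function name and conflict-resolution rule, are illustrative assumptions, not the authors' implementation.

```python
import random

def defer_based_mis(adj, max_stages=100, seed=0):
    """Build an independent set over multiple stages, letting vertices
    defer their decision. `adj[v]` is the list of neighbors of vertex v.
    A learned policy would replace the random choice used here."""
    rng = random.Random(seed)
    n = len(adj)
    state = ["defer"] * n  # each vertex: "defer", "include", or "exclude"
    for _ in range(max_stages):
        undecided = [v for v in range(n) if state[v] == "defer"]
        if not undecided:
            break
        for v in undecided:
            # stand-in policy: include with probability 0.5, else keep deferring
            if rng.random() < 0.5:
                state[v] = "include"
        # resolve conflicts: if two adjacent vertices were both included
        # this stage, send the higher-indexed one back to "defer"
        for v in range(n):
            if state[v] == "include":
                for u in adj[v]:
                    if state[u] == "include" and u > v:
                        state[u] = "defer"
        # neighbors of included vertices can never join the set
        for v in range(n):
            if state[v] == "include":
                for u in adj[v]:
                    if state[u] == "defer":
                        state[u] = "exclude"
    # any vertex still deferring when the budget runs out is dropped
    return {v for v in range(n) if state[v] == "include"}
```

For example, on the path graph 0–1–2–3 (`adj = [[1], [0, 2], [1, 3], [2]]`) the returned set is guaranteed to be independent: no two members are adjacent. The number of stages used adapts to how quickly vertices commit, which is the property LwD exploits on large graphs.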

Author Information

Sungsoo Ahn (KAIST)
Younggyo Seo (KAIST)
Jinwoo Shin (KAIST)
