D-CORE: Incentivizing Task Decomposition in Large Reasoning Models for Complex Tool Use
Abstract
Effective tool use and reasoning are essential capabilities for large reasoning models (LRMs) to address complex real-world problems. Through empirical analysis, we identify a prevalent "Lazy Reasoning" phenomenon, in which LRMs frequently engage in repetitive and unproductive reflective reasoning. This occurs primarily because of their inadequate ability to decompose tasks when reasoning in complex tool-use scenarios. To address this, we propose D-CORE (Decomposing tasks and Composing Reasoning processes), a two-stage training framework that first incentivizes the LRM's task-decomposition reasoning capability via self-distillation, then applies diversity-aware reinforcement learning (RL) to restore the LRM's reflective reasoning capability. D-CORE achieves robust tool-use improvements across diverse benchmarks and model scales. Experiments on BFCLv3 demonstrate the superiority of our method: D-CORE-8B reaches 77.7% accuracy, surpassing the best-performing 8B model by 5.7%, while D-CORE-14B establishes a new state of the art at 79.3%, outperforming 70B models despite being 5× smaller. The source code and a data sample are provided in the supplementary material.