MixReasoning: Switching Modes to Think
Abstract
Reasoning models enhance performance by tackling problems in a step-by-step manner, decomposing them into sub-problems and exploring long chains of thought before producing an answer. However, applying extended reasoning to every step introduces substantial redundancy, as sub-problems vary widely in difficulty and complexity: a small number of pivotal steps are genuinely challenging and decisive for the final answer, while many others involve only straightforward revisions or simple computations. A natural idea, therefore, is to endow reasoning models with the ability to adapt to this variation rather than treating all steps with the same level of elaboration. To this end, we propose MixReasoning, a framework that dynamically adjusts the depth of reasoning within a single response. MixReasoning enables fine-grained mode switching by training a lightweight concise LoRA adapter and controlling its strength to trigger switches based on reasoning difficulty estimated from sliding-window token confidence, yielding human-like transitions between fast and slow reasoning. The resulting chain of thought then becomes a mixture of detailed reasoning on difficult steps and concise inference on simpler ones. Experiments on AIME24, MATH-500, GPQA, and GSM8K demonstrate that MixReasoning shortens reasoning length by 13\%--49\% across benchmarks of varying difficulty, delivering consistent efficiency gains while maintaining performance.
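The switching mechanism described above can be sketched in a few lines. This is a minimal illustration only, not the paper's implementation: the window size, confidence threshold, and the names `sliding_confidence` and `adapter_strength` are assumptions, and the exact difficulty estimator may differ.

```python
import math
from collections import deque

def sliding_confidence(token_logprobs, window=8):
    # Estimate per-step reasoning difficulty as the mean token
    # probability over a sliding window of recent tokens.
    # (Hypothetical formulation; the paper's estimator may differ.)
    buf = deque(maxlen=window)
    scores = []
    for lp in token_logprobs:
        buf.append(math.exp(lp))  # log-probability -> probability
        scores.append(sum(buf) / len(buf))
    return scores

def adapter_strength(conf, threshold=0.85, alpha_concise=1.0):
    # High confidence suggests an easy step: switch on the concise
    # LoRA adapter (fast mode). Low confidence suggests a hard step:
    # set its strength to zero and fall back to detailed reasoning.
    return alpha_concise if conf >= threshold else 0.0
```

In use, the generation loop would query `sliding_confidence` after each decoded token and rescale the concise adapter's contribution with `adapter_strength`, producing the mixed fast/slow chain of thought the abstract describes.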