Dynamic Thinking-Token Selection for Efficient Reasoning in Large Reasoning Models
Zhenyuan Guo ⋅ Tong Chen ⋅ Wenlong Meng ⋅ Chen Gong ⋅ Xin Yu ⋅ Chengkun Wei ⋅ Wenzhi Chen
Abstract
Large Reasoning Models (LRMs) excel at solving complex problems by explicitly generating a reasoning trace before deriving the final answer. However, these extended generations incur a substantial memory footprint and computational overhead, bottlenecking LRMs' efficiency. This work uses attention maps to analyze the influence of reasoning traces and uncovers an interesting phenomenon: *only some decision-critical tokens in a reasoning trace steer the model toward the final answer, while the remaining tokens contribute negligibly.* Building on this observation, we propose **Dyn**amic **T**hinking-Token **S**election (**DynTS**). This method identifies decision-critical tokens and retains only their associated Key-Value (KV) cache states during inference, evicting the remaining redundant entries to improve efficiency. Across six benchmarks, DynTS surpasses state-of-the-art KV cache compression methods, improving Pass@1 by $2.6\%$ under the same budget. Compared to vanilla Transformers, it reduces inference latency by $1.84$–$2.62\times$ and peak KV-cache memory footprint by $3.32$–$5.73\times$ without compromising LRMs' reasoning performance. The code is available at the anonymous link.\footnote{https://anonymous.4open.science/r/DynTS-2D0D}
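To make the core idea concrete, the sketch below illustrates attention-guided KV cache eviction in the general style the abstract describes: rank cached tokens by the attention mass they have accumulated and keep only the top-budget entries. This is a minimal illustrative example, not the paper's actual implementation; the function name `evict_kv_cache` and all shapes are assumptions for demonstration.

```python
import numpy as np

def evict_kv_cache(keys, values, attn_scores, budget):
    """Illustrative KV cache eviction (not the paper's actual API).

    keys, values : arrays of shape (seq_len, head_dim)
    attn_scores  : accumulated attention mass each cached token has
                   received, shape (seq_len,); higher mass serves as a
                   proxy for "decision-critical" tokens.
    budget       : number of cached tokens to retain.
    """
    # Select the `budget` tokens with the highest accumulated attention.
    keep = np.argsort(attn_scores)[-budget:]
    keep.sort()  # restore original token order for the retained entries
    return keys[keep], values[keep]

# Toy example: 8 cached tokens, head dim 4, keep the top 3.
rng = np.random.default_rng(0)
keys = rng.normal(size=(8, 4))
values = rng.normal(size=(8, 4))
scores = np.array([0.9, 0.1, 0.05, 0.8, 0.02, 0.7, 0.03, 0.01])
k, v = evict_kv_cache(keys, values, scores, budget=3)
print(k.shape, v.shape)  # (3, 4) (3, 4)
```

Under this scheme, memory for the cache scales with the retained budget rather than the full trace length, which is the source of the latency and memory savings the abstract reports.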