DenseSteer: Steering Small Language Models towards Dense Math Reasoning
Yang Ouyang ⋅ Shuhang Lin ⋅ Jung-Eun Kim
Abstract
Large language models (LLMs) demonstrate strong chain-of-thought (CoT) reasoning abilities, whereas smaller models ($\leq$ 3B parameters) significantly underperform on multi-step reasoning tasks. Through empirical analyses of the Qwen-2.5 model family on math reasoning benchmarks, we find that more proficient reasoning is associated with fewer reasoning steps but higher information density per step, a property we term *Dense Reasoning*. Motivated by this observation, we propose **DenseSteer**, a training-free, inference-time steering framework that enhances small-model reasoning by modulating internal representations toward dense reasoning patterns. Experiments show that our method yields consistent accuracy improvements without increasing token-level negative log-likelihood (NLL), highlighting dense reasoning as an effective structural approach to mathematical problem solving.
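The abstract describes steering as modulating internal representations toward a dense-reasoning pattern. A common form of such inference-time intervention adds a scaled direction vector to a layer's hidden states. The sketch below is a hypothetical illustration of that general mechanism, not the paper's actual implementation: the names `steer`, `direction`, and `alpha`, and the mean-difference construction of the direction, are all assumptions.

```python
import numpy as np

def steer(hidden, direction, alpha=1.0):
    """Shift hidden states along a steering direction.

    `direction` stands in for a 'dense reasoning' vector; in a real
    system it would be derived from model activations, e.g. contrasting
    dense vs. verbose reasoning traces (an assumption here).
    """
    return hidden + alpha * direction

# Hypothetical construction: mean difference between activations
# collected on dense- and sparse-reasoning examples.
rng = np.random.default_rng(0)
dense_acts = rng.normal(size=(8, 4))    # activations from dense traces
sparse_acts = rng.normal(size=(8, 4))   # activations from verbose traces
direction = dense_acts.mean(axis=0) - sparse_acts.mean(axis=0)

h = rng.normal(size=(1, 4))             # one token's hidden state
h_steered = steer(h, direction, alpha=0.5)
```

With `alpha=0` the intervention is a no-op, which makes the steering strength easy to ablate at inference time.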