UniCode: Augmenting Evaluation for Code Reasoning
Abstract
Current coding benchmarks often inflate the capabilities of Large Language Models (LLMs): their static paradigms and data contamination allow models to exploit statistical shortcuts rather than demonstrate genuine reasoning. To address this, we introduce \textbf{UniCode}, a generative evaluation framework that systematically probes the limits of LLM code reasoning via: (1) multi-dimensional augmentation that transforms seed problems into complex variations, disrupting fixed algorithmic patterns; (2) a highly reliable, automated test-generation pipeline for scalable evaluation; and (3) fine-grained metrics that provide rich error signals. Experiments reveal a 31.2\% performance collapse of state-of-the-art models on UniCode, driven primarily by deficiencies in conceptual modeling and scalability reasoning rather than syntactic errors. Furthermore, we uncover a ``seed-problem regression'' in which models revert to memorized seed logic instead of following the new specification, signaling a reliance on shortcuts over reasoning. These findings validate UniCode as a robust framework for exposing model fragility and fostering reasoning-oriented code intelligence.