FrontierCS: Evolving Challenges for Evolving Intelligence
Abstract
We introduce FrontierCS, a benchmark of 240 open-ended problems across diverse areas of computer science, designed and reviewed by experts, including CS PhDs and top-tier competitive programming participants and problem setters. Unlike existing benchmarks that focus on tasks with known optimal solutions, FrontierCS targets problems where the optimal solution is unknown but the quality of a solution can be objectively evaluated. Models solve these tasks by implementing executable programs rather than outputting a direct answer. FrontierCS includes algorithmic problems, often NP-hard variants of competitive programming problems that admit objective partial scoring, as well as research problems with the same property. For each problem, we provide an expert reference solution and an automatic evaluator. Combining open-ended design, measurable progress, and expert curation, FrontierCS provides a benchmark at the frontier of computer-science difficulty. Empirically, we find that frontier reasoning models still lag far behind human experts, and that simply increasing reasoning budgets does not close this gap on open-ended challenges. Moreover, these models struggle to identify internal equivalence classes, and existing agentic frameworks also exhibit brittleness on such problems due to overfitting. FrontierCS thus offers a new lens into model capabilities on real frontier computer science problems.