EVMbench: Evaluating AI Agents on Smart Contract Security
Abstract
Smart contracts on public blockchains now manage large amounts of value, and vulnerabilities in these systems can lead to substantial losses. As AI agents become more capable at reading, writing, and running code, it is natural to ask how well they can already navigate this landscape, both in ways that improve security and in ways that might increase risk. We introduce EVMbench, an evaluation that measures the ability of agents to detect, patch, and exploit smart contract vulnerabilities. EVMbench draws on 120 curated vulnerabilities from 37 repositories and, in its most realistic setting, grades agents programmatically using tests and on-chain state within a local Ethereum execution environment. We evaluate a range of frontier agents and find that they can discover and exploit vulnerabilities end-to-end against live blockchain instances. We also compare several agent scaffolds and find that, in some cases, gains from scaffolding improvements alone rival those from increased model quality. We release code, tasks, and tooling to support continued measurement of these capabilities and future work on security.