SWE-ABS: Adversarial Benchmark Strengthening Exposes Inflated Success Rates on Test-Based Benchmarks
Boxi Yu ⋅ Yang Cao ⋅ Yuzhong Zhang ⋅ Liting Lin ⋅ Junjielong Xu ⋅ Zhiqing Zhong ⋅ Qinghua Xu ⋅ Guancheng Wang ⋅ Jialun Cao ⋅ Shing-Chi Cheung ⋅ Pinjia He ⋅ Lionel Briand
Abstract
The SWE-Bench Verified leaderboard is approaching saturation, with the top system achieving 78.80\%. However, we reveal that this performance is inflated: our re-evaluation demonstrates that one in five "solved" patches from the top-30 agents is semantically incorrect, passing only because weak test suites fail to expose its errors. We present SWE-ABS, an adversarial framework that strengthens test suites through a two-stage pipeline: (1) coverage-driven augmentation that uses program slicing to target untested code regions, and (2) mutation-driven adversarial testing that synthesizes plausible-but-incorrect patches to expose semantic blind spots. On SWE-Bench Verified (500 instances), SWE-ABS strengthens 48.4\% of instances (a $24.2\times$ improvement over prior work) and rejects 21.4\% of previously passing patches. Consequently, the top agent's score decreases from 78.80\% to 61.80\%, causing significant leaderboard reshuffling (e.g., the top-ranked agent drops to 4th place).
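The mutation-driven idea in stage (2) can be illustrated with a minimal, self-contained sketch. All names and mutants below are hypothetical, not the paper's implementation: a test suite is considered strong for a patched function only if every plausible-but-incorrect variant (mutant) of that function fails at least one test; mutants that survive indicate semantic blind spots that new tests should target.

```python
def patched_clamp(x, lo, hi):
    """The (correct) patched function under evaluation. Hypothetical example."""
    return max(lo, min(x, hi))

# Plausible-but-incorrect variants, each mimicking a subtle semantic bug.
mutants = [
    lambda x, lo, hi: x,              # clamping missing entirely
    lambda x, lo, hi: min(x, hi),     # lower bound never enforced
    lambda x, lo, hi: max(x, lo),     # upper bound never enforced
]

def run_suite(fn, cases):
    """Return True if fn passes every (args, expected) test case."""
    return all(fn(*args) == expected for args, expected in cases)

def surviving_mutants(mutants, suite):
    """Mutants the suite fails to kill, i.e., semantic blind spots."""
    return [m for m in mutants if run_suite(m, suite)]

# A weak suite: only an in-range value, so neither bound is exercised.
weak_suite = [((5, 0, 10), 5)]
# A strengthened suite adds cases targeting the untested boundaries.
strong_suite = weak_suite + [((-3, 0, 10), 0), ((42, 0, 10), 10)]

print(len(surviving_mutants(mutants, weak_suite)))    # all three mutants survive
print(len(surviving_mutants(mutants, strong_suite)))  # every mutant is killed
```

A patch that passes the weak suite here proves little; the strengthened suite rejects all three incorrect variants while still accepting the correct one, mirroring how SWE-ABS rejects previously passing patches.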