SWE-MiniSandbox: Container-Free Reinforcement Learning for Building Software Engineering Agents
Abstract
Reinforcement learning (RL) has become a key paradigm for training software engineering (SWE) agents, yet its practical accessibility and scalability are often constrained by the container-based execution frameworks used for environment isolation. As the number of task instances grows, pre-cached container images introduce substantial storage overhead, limiting large-scale training under limited container resources and excluding users without container management privileges. We introduce SWE-MiniSandbox, a lightweight, container-free method that enables scalable RL training of SWE agents without sacrificing isolation. Instead of relying on per-instance containers, SWE-MiniSandbox executes each task in an isolated workspace backed by kernel-level mechanisms, substantially reducing system overhead, and leverages lightweight environment pre-caching to eliminate the need for bulky container images. As a result, our approach lowers disk usage to approximately 5\% of that required by container-based pipelines and reduces environment preparation time to about 25\% of the container baseline. Empirical results demonstrate that SWE-MiniSandbox achieves evaluation performance comparable to standard container-based pipelines. By removing the dependency on heavy container infrastructure, SWE-MiniSandbox offers a practical and accessible foundation for scaling RL-based SWE agents, particularly in resource-constrained research environments.
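The per-task isolation idea described above can be sketched in a few lines: instead of pulling a container image, each task gets a throwaway workspace seeded from a pre-cached environment and is executed with its working directory confined there. This is a hypothetical illustration, not the paper's actual implementation; the function name, cache layout, and the suggestion of wrapping the command with `unshare` for namespace isolation are all assumptions.

```python
# Hypothetical sketch of container-free task execution: seed a fresh
# temporary workspace from a pre-cached checkout, run the task command
# inside it, then discard the workspace. Illustrative only; not the
# paper's implementation.
import os
import shutil
import subprocess
import tempfile


def run_task_isolated(cached_repo: str, cmd: list[str]) -> subprocess.CompletedProcess:
    """Copy a pre-cached repo into a throwaway workspace and run cmd there."""
    workspace = tempfile.mkdtemp(prefix="swe-task-")
    try:
        # Seeding from a local cache is cheap relative to pulling a
        # per-instance container image.
        repo_dir = os.path.join(workspace, "repo")
        shutil.copytree(cached_repo, repo_dir)
        # Confine the command's working directory to the workspace. On
        # Linux, the command could additionally be wrapped with a
        # kernel-level mechanism such as `unshare` for namespace isolation.
        return subprocess.run(cmd, cwd=repo_dir, capture_output=True, text=True)
    finally:
        # Drop all per-task state once the rollout finishes.
        shutil.rmtree(workspace, ignore_errors=True)
```

Because the workspace is created and destroyed per task, no per-instance image needs to be stored, which is the source of the disk-usage reduction the abstract reports.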