RLAnything: Forging Environment, Policy, and Reward Model in a Completely Dynamic RL System
Abstract
The quality of both the environment and the reward model fundamentally governs the effectiveness of reinforcement learning. Accordingly, we propose RLAnything, a reinforcement learning framework that dynamically optimizes each component (environment, policy, and reward model) in a closed loop, amplifying learning signals and strengthening the overall system. Specifically, the policy is trained with integrated feedback from step-wise and outcome signals, while the reward model is jointly optimized via consistency feedback, which in turn further improves policy training. Moreover, our theory-motivated automatic environment adaptation improves training for both the reward model and the policy by leveraging critic feedback from each, enabling the system to learn from experience. Empirically, each added component consistently improves the overall system, and RLAnything yields substantial gains in practical applications, boosting Qwen3-VL-8B-Thinking by 8.5% on OSWorld and Qwen2.5-7B-Instruct by 21.2% and 12.1% on AlfWorld and LiveBench, respectively.
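To make the closed loop concrete, below is a minimal, self-contained Python sketch of the training dynamics the abstract describes: the policy updates from blended step-wise and outcome signals, the reward model updates from consistency feedback, and the environment adapts using critic feedback from both. All class names, update rules, and scalar "models" here are illustrative assumptions for exposition, not the paper's actual objectives or architectures.

```python
# Illustrative sketch only: scalar stand-ins for the policy, reward model,
# and environment; the paper's real components are learned models.
import random

random.seed(0)

class Policy:
    """Toy policy: a scalar 'skill' nudged upward by reward."""
    def __init__(self):
        self.skill = 0.0

    def act(self, difficulty):
        # Success probability rises with skill, falls with difficulty.
        return random.random() < 1.0 / (1.0 + max(0.0, difficulty - self.skill))

    def update(self, step_reward, outcome_reward, lr=0.05):
        # Integrated feedback: blend step-wise and outcome signals.
        self.skill += lr * (0.5 * step_reward + 0.5 * outcome_reward)

class RewardModel:
    """Toy reward model: emits a step-wise reward, tuned by consistency feedback."""
    def __init__(self):
        self.bias = 0.0

    def step_reward(self, success):
        return (1.0 if success else -1.0) + self.bias

    def update(self, predicted, outcome, lr=0.05):
        # Consistency feedback: shrink the gap between the step-wise
        # prediction and the eventual outcome signal.
        self.bias -= lr * (predicted - outcome)

class Environment:
    """Toy environment: difficulty adapts from critic feedback."""
    def __init__(self):
        self.difficulty = 1.0

    def adapt(self, success_rate, reward_gap, lr=0.1):
        # Raise difficulty when the policy succeeds too easily; ease off
        # while the reward model is still inconsistent.
        self.difficulty += lr * (success_rate - 0.5) - lr * reward_gap

def train(steps=200, batch=16):
    policy, rm, env = Policy(), RewardModel(), Environment()
    for _ in range(steps):
        successes, gaps = [], []
        for _ in range(batch):                     # one batch of rollouts
            success = policy.act(env.difficulty)
            outcome = 1.0 if success else -1.0     # outcome signal
            step_r = rm.step_reward(success)       # step-wise signal
            policy.update(step_r, outcome)         # policy <- both signals
            rm.update(step_r, outcome)             # RM <- consistency feedback
            successes.append(success)
            gaps.append(abs(step_r - outcome))
        env.adapt(sum(successes) / batch, sum(gaps) / batch)
    return policy, rm, env

if __name__ == "__main__":
    policy, rm, env = train()
    print(f"skill={policy.skill:.2f} rm_bias={rm.bias:.2f} "
          f"difficulty={env.difficulty:.2f}")
```

The point of the sketch is the dependency structure rather than the arithmetic: every component's update consumes feedback produced by the others, which is what makes the system closed-loop and fully dynamic.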