RiskZero: Plan More to Risk Less with a Learned Model
Abstract
AlphaZero and MuZero have demonstrated superhuman performance across a range of strategic tasks. Yet their reliance on maximizing expected returns limits their use in real-world settings, where even high-return policies may incur rare but catastrophic failures. To address this limitation, we introduce RiskZero, the first MuZero-family method for risk-sensitive decision-making and planning with zero prior knowledge of environment dynamics. RiskZero learns distributional quantities to estimate trajectory-level risk, guiding search toward policies that explicitly avoid rare but severe outcomes. We establish theoretical convergence to optimal, stationary risk-sensitive policies and validate our approach on environments designed to test risk-sensitive learning from pixels, as well as on larger-scale combinatorial tasks. Across all settings, RiskZero consistently outperforms state-of-the-art risk-sensitive baselines while improving sample efficiency, providing a general framework for safer and more reliable model-based reinforcement learning under uncertainty.
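To make the abstract's core idea concrete, the following is a minimal illustrative sketch (not the paper's actual method): one standard way to turn a learned return distribution into a trajectory-level risk signal is Conditional Value-at-Risk (CVaR) over return quantiles, which a search procedure could then use to prefer actions that avoid rare but severe outcomes. All function names and parameters here are hypothetical.

```python
# Hypothetical sketch: risk-sensitive action values from return quantiles.
# CVaR_alpha = mean of the worst alpha-fraction of outcomes (lower tail).

def cvar(quantiles, alpha=0.1):
    """Mean of the worst alpha-fraction of sorted return quantiles."""
    q = sorted(quantiles)              # ascending returns
    k = max(1, int(len(q) * alpha))   # size of the worst tail
    return sum(q[:k]) / k

def risk_sensitive_value(quantiles, alpha=0.1, beta=0.5):
    """Blend expected return with lower-tail CVaR; beta trades off
    mean performance against tail risk (beta=0 recovers the mean)."""
    mean = sum(quantiles) / len(quantiles)
    return (1 - beta) * mean + beta * cvar(quantiles, alpha)

# Two actions with equal mean return but different tail risk: a
# risk-sensitive criterion prefers the one without the catastrophe.
safe = [9, 10, 10, 10, 11]        # low variance, mean 10
risky = [-20, 10, 15, 20, 25]     # rare catastrophic outcome, mean 10
assert risk_sensitive_value(safe) > risk_sensitive_value(risky)
```

Under this blended criterion the `safe` action scores 9.5 and the `risky` action scores -5.0, even though both have the same expected return of 10.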