SVL: Goal-Conditioned Reinforcement Learning as Survival Learning
Abstract
Standard approaches to goal-conditioned reinforcement learning (GCRL) that rely on temporal-difference learning can be unstable and sample-inefficient due to bootstrapping. While recent work has explored contrastive and supervised formulations to improve stability, we present a probabilistic alternative, survival value learning (SVL), which reframes GCRL as a survival learning problem by modeling the time-to-goal from each state as a probability distribution. This perspective yields a closed-form identity that expresses the goal-conditioned value function as a discounted sum of survival probabilities, enabling value estimation through a hazard model trained by maximum likelihood on both goal-reaching (event) and right-censored trajectories. We introduce three practical value estimators: a finite-horizon truncation and two binned infinite-horizon approximations that capture long-horizon objectives. Experiments on standard offline GCRL benchmarks show that SVL combined with hierarchical actors matches or surpasses strong hierarchical TD baselines, particularly excelling on complex, long-horizon tasks.
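To make the closed-form identity concrete, the sketch below states one version of it under an assumed sparse-reward convention (a reward of 1 per step until the goal is first reached); the symbols T_g (time-to-goal), S (survival function), and h (hazard) are illustrative notation, not necessarily the paper's exact setup.

% Illustrative sketch under an assumed reward convention; notation is hypothetical.
\begin{align*}
  V^\pi(s, g)
    &= \mathbb{E}\!\left[\sum_{t=0}^{T_g - 1} \gamma^t\right]
     = \sum_{t=0}^{\infty} \gamma^t \, \Pr(T_g > t)
     = \sum_{t=0}^{\infty} \gamma^t \, S(t \mid s, g), \\
  S(t \mid s, g)
    &= \prod_{k=0}^{t} \bigl(1 - h(k \mid s, g)\bigr),
  \qquad
  h(k \mid s, g) = \Pr\bigl(T_g = k \mid T_g \ge k\bigr).
\end{align*}

Under this factorization, a trajectory that reaches the goal at step k contributes the hazard h(k) and the preceding (1 - h) survival factors to the likelihood, while a right-censored trajectory contributes only (1 - h) factors, which is how both kinds of data enter the maximum-likelihood fit.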