Reinforcement Learning for Reachability: Guaranteeing Asymptotic Optimality
Abstract
{\em Reinforcement learning} (RL) for {\em reachability specifications} is fundamental in sequential decision-making, yet its theoretical guarantees remain underexplored. A recent work achieves {\em asymptotic convergence} to optimal policies, but offers limited insight into the dynamics of this convergence. In this work, we present an alternative approach that yields deeper theoretical insight into how convergence unfolds. Our approach builds on {\em PAC learning}, which guarantees near-optimal policies with high confidence in finite time but requires knowledge of internal MDP parameters such as the minimum transition probability. We argue that although these parameters are unknown in the RL setting, they can be estimated iteratively with increasing accuracy. By repeatedly satisfying the PAC conditions with these refined estimates, we show that exact optimality is achieved in the limit. Empirical evaluations on standard benchmarks validate our theoretical insights into convergence dynamics.
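To make the iterative scheme concrete, the following is a minimal Python sketch, assuming a toy MDP, a Hoeffding-style sample budget instantiated with the current estimate of the minimum transition probability, and a halving schedule for the accuracy and confidence parameters. The MDP, the budget formula, and all names (\texttt{iterated\_pac}, \texttt{estimate\_p\_min}, and so on) are illustrative assumptions, not the paper's actual constructions.
\begin{verbatim}
import math
import random
from collections import defaultdict

# Toy MDP with a reachability objective: maximize the probability of
# reaching "goal" while avoiding the absorbing "trap". This MDP and the
# schedule below are illustrative stand-ins, not the paper's setup.
MDP = {
    (0, "a"): {1: 0.9, "trap": 0.1},
    (0, "b"): {2: 0.6, "trap": 0.4},
    (1, "a"): {"goal": 0.8, 0: 0.2},
    (1, "b"): {"trap": 1.0},
    (2, "a"): {"goal": 0.5, "trap": 0.5},
    (2, "b"): {0: 1.0},
}
STATES, ACTIONS = [0, 1, 2], ["a", "b"]

def sample_next(s, a):
    succ = MDP[(s, a)]
    return random.choices(list(succ), weights=list(succ.values()))[0]

def explore(n, counts):
    """Draw n fresh transition samples from every state-action pair."""
    for s in STATES:
        for a in ACTIONS:
            for _ in range(n):
                counts[(s, a)][sample_next(s, a)] += 1

def estimate_p_min(counts):
    """Estimate the minimum nonzero transition probability from
    empirical frequencies; it sharpens as samples accumulate."""
    freqs = [n / sum(succ.values())
             for succ in counts.values() for n in succ.values()]
    return min(freqs)

def greedy_policy(counts, sweeps=200):
    """Value iteration on the empirical model; v[s] approximates the
    maximal probability of reaching the goal from state s."""
    def q(s, a, v):
        succ, total = counts[(s, a)], sum(counts[(s, a)].values())
        return sum(n / total * v[t] for t, n in succ.items())
    v = defaultdict(float)
    v["goal"] = 1.0  # success state; "trap" keeps the default value 0
    for _ in range(sweeps):
        for s in STATES:
            v[s] = max(q(s, a, v) for a in ACTIONS)
    return {s: max(ACTIONS, key=lambda a: q(s, a, v)) for s in STATES}

def iterated_pac(rounds=5, eps0=0.5, delta0=0.5):
    """Tighten (eps, delta) each round and re-run a PAC-style phase
    whose sample budget uses the current estimate of p_min."""
    counts = defaultdict(lambda: defaultdict(int))
    p_min_hat = 1.0  # optimistic initial guess, refined every round
    for k in range(rounds):
        eps, delta = eps0 / 2**k, delta0 / 2**k
        # Hoeffding-style budget (illustrative; the paper's bound differs).
        budget = math.ceil(math.log(2 / delta) / (2 * eps**2 * p_min_hat**2))
        explore(min(budget, 5_000), counts)  # cap keeps the toy demo fast
        p_min_hat = estimate_p_min(counts)
        policy = greedy_policy(counts)
        print(f"round {k}: eps={eps:.4f}  p_min_hat={p_min_hat:.3f}  {policy}")
    return policy

if __name__ == "__main__":
    random.seed(0)
    iterated_pac()  # settles on {0: 'a', 1: 'a', 2: 'b'} for this toy MDP
\end{verbatim}
The point of the sketch is the interplay the abstract describes: each round's PAC-style budget depends on a parameter (here, the minimum transition probability) that is itself re-estimated from the data the round produces, so successive rounds satisfy progressively tighter PAC conditions.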