Improved Bounds for Reward-Agnostic and Reward-Free Exploration
Oran Ridel ⋅ Alon Peled-Cohen
Abstract
We study *reward-free* and *reward-agnostic* exploration in episodic finite-horizon Markov decision processes (MDPs), where an agent explores an unknown environment without observing external rewards. Reward-free exploration aims to enable computation of an $\epsilon$-optimal policy for *any* reward function revealed after exploration, while reward-agnostic exploration targets $\epsilon$-optimality for rewards drawn from a small finite class. In the *reward-agnostic setting*, Li, Yan, Chen, and Fan (2024) achieve minimax-optimal sample complexity, but only for a restrictively small accuracy parameter $\epsilon$. We propose a new algorithm that significantly relaxes the requirement on $\epsilon$. Our approach is novel and of independent technical interest. Our algorithm employs an online learning procedure with carefully designed rewards to construct an exploration policy, which is used to gather data sufficient for accurate estimation of the dynamics and subsequent computation of an $\epsilon$-optimal policy once the reward is revealed. Finally, we establish a tight lower bound for *reward-free exploration*, closing the gap between the known upper and lower bounds.
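To make the explore-then-plan pipeline described above concrete, the following is a minimal sketch in a tabular finite-horizon MDP. All names here (`S`, `A`, `H`, `N`, `step`, `plan`) are illustrative assumptions, not the paper's notation, and a uniform exploration policy stands in for the paper's carefully constructed one; the sketch only shows the three phases the abstract names: exploration without rewards, estimation of the dynamics, and planning once a reward is revealed.

```python
import numpy as np

# Hypothetical tabular setup: S states, A actions, horizon H.
# This is a conceptual sketch of the explore -> estimate -> plan pipeline,
# not the paper's algorithm; uniform exploration replaces its learned policy.

rng = np.random.default_rng(0)
S, A, H, N = 5, 3, 4, 20_000  # states, actions, horizon, exploration episodes

# Ground-truth dynamics: P[h, s, a] is a distribution over next states.
P = rng.dirichlet(np.ones(S), size=(H, S, A))

def step(h, s, a):
    """Sample a next state from the true (unknown to the agent) dynamics."""
    return rng.choice(S, p=P[h, s, a])

# --- Phase 1: reward-free exploration (uniform policy as a placeholder). ---
counts = np.zeros((H, S, A, S))
for _ in range(N):
    s = 0
    for h in range(H):
        a = rng.integers(A)
        s_next = step(h, s, a)
        counts[h, s, a, s_next] += 1
        s = s_next

# --- Phase 2: estimate the dynamics from the exploration data. ---
visits = counts.sum(axis=-1, keepdims=True)
P_hat = np.where(visits > 0, counts / np.maximum(visits, 1), 1.0 / S)

# --- Phase 3: reward revealed; plan by backward induction on the model. ---
def plan(r):
    """Return a greedy policy for reward r[h, s, a] under P_hat."""
    V = np.zeros(S)
    pi = np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        Q = r[h] + P_hat[h] @ V  # Q[s, a] = r(s, a) + E_{s'~P_hat}[V(s')]
        pi[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return pi, V[0]  # policy and its estimated value at the initial state

r = rng.random((H, S, A))  # an arbitrary revealed reward function
pi, v0 = plan(r)
print("estimated value of the planned policy:", v0)
```

Note that planning happens entirely on the estimated model `P_hat`: no further interaction with the environment is needed after the reward is revealed, which is the defining feature of the reward-free/reward-agnostic setting.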