Emergence of Exploration in Policy Gradient Reinforcement Learning via Retrying
Soichiro Nishimori ⋅ Paavo Parmas ⋅ Sotetsu Koyamada ⋅ Tadashi Kozuno ⋅ Toshinori Kitamura ⋅ Shin Ishii ⋅ Yutaka Matsuo
Abstract
In reinforcement learning (RL), agents benefit from exploration *only* because they repeatedly encounter similar states: trying different actions can improve performance or reduce uncertainty; without such retries, a greedy policy is optimal. We formalize this intuition with **ReMax**, an objective that evaluates a policy by the expected maximum return over $M$ samples ($M \in \mathbb{N}$), while accounting for return uncertainty. Optimizing this objective induces stochastic exploration as an emergent property, without explicit bonus terms. For efficient policy optimization, we derive a new policy-gradient formulation for ReMax and introduce **Re**Max **PPO** (**RePPO**), a PPO variant that optimizes ReMax while generalizing the discrete retry count $M$ to a continuous parameter $m > 0$, enabling fine-grained control of exploration. Empirically, RePPO promotes exploration—without any explicit exploration bonuses—on the MinAtar and Craftax benchmarks.
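As a rough sketch (the notation here is ours, not the paper's; the formal definition in the paper may differ), the ReMax objective described above can be read as the expected maximum over $M$ i.i.d. returns sampled under the policy:

$$J_M(\pi) \;=\; \mathbb{E}_{G_1,\dots,G_M \,\overset{\text{i.i.d.}}{\sim}\, p_\pi}\!\left[\max_{i \in \{1,\dots,M\}} G_i\right],$$

where $p_\pi$ denotes the distribution of trajectory returns under $\pi$. Setting $M = 1$ recovers the standard expected-return objective, while larger $M$ favors policies whose return distribution has a heavier upper tail, which is consistent with the abstract's claim that stochastic exploration emerges from optimizing this objective.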