Commit to the Bit: Reactive Reinforcement Learning Done Right
Onno Eberhard ⋅ Claire Vernade ⋅ Michael Muehlebach
Abstract
Theoretical properties of reinforcement learning algorithms are most commonly studied under the Markov assumption. This is unrealistic, as most environments encountered in practice are either partially observable or require function approximation that gives the agent access only to non-Markovian state features. We consider the problem of learning an optimal reactive policy in a finite environment under deterministic observations (or, equivalently, hard state aggregation). We introduce a new algorithm, _Committed Q-learning_, and prove almost sure convergence to the optimal reactive policy under an intuitive assumption we call _rewire-robustness_. This assumption is strictly weaker than the $q_\star$-realizability condition used in prior work. Our algorithm is a variant of classical Q-learning in which the behavior policy commits to a single action upon entering a feature and resamples only when the observed feature changes.
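The commitment rule described above can be illustrated with a minimal sketch. The toy environment, the $\varepsilon$-greedy resampling rule, and all names and hyperparameters below (`ToyAggregatedChain`, `committed_q_learning`, `alpha`, `eps`) are illustrative assumptions, not taken from the paper; the paper's actual algorithm and analysis may differ in these details.

```python
import numpy as np


class ToyAggregatedChain:
    """Hypothetical 4-state chain whose states are deterministically
    aggregated into 2 observed features (hard state aggregation)."""
    n_actions = 2
    n_features = 2

    def reset(self):
        self.state = 0
        return self._feature()

    def _feature(self):
        # States {0, 1} map to feature 0; states {2, 3} map to feature 1.
        return self.state // 2

    def step(self, action):
        # Action 1 moves right, action 0 moves left (clipped to the chain).
        self.state = min(self.state + 1, 3) if action == 1 else max(self.state - 1, 0)
        reward = 1.0 if self.state == 3 else 0.0
        return self._feature(), reward


def committed_q_learning(env, n_steps=20_000, gamma=0.9, alpha=0.1, eps=0.1, seed=0):
    """Sketch of the commitment mechanism from the abstract: the behavior
    policy samples an action when a new feature is observed and repeats
    ("commits to") it until the observed feature changes."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((env.n_features, env.n_actions))

    x = env.reset()
    a = int(rng.integers(env.n_actions))  # initial committed action
    for _ in range(n_steps):
        x_next, r = env.step(a)
        # Standard Q-learning update on the observed (non-Markovian) feature.
        Q[x, a] += alpha * (r + gamma * Q[x_next].max() - Q[x, a])
        if x_next != x:
            # Feature changed: resample a new committed action
            # (eps-greedy resampling is an assumption for this sketch).
            if rng.random() < eps:
                a = int(rng.integers(env.n_actions))
            else:
                a = int(Q[x_next].argmax())
        # Otherwise, keep executing the committed action.
        x = x_next
    return Q


if __name__ == "__main__":
    print(committed_q_learning(ToyAggregatedChain()))
```

Note that the only change relative to classical Q-learning lies in the behavior policy: the update rule is untouched, and actions are held fixed within a feature rather than resampled at every step.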