Refined Analysis of Entropy-Regularized Actor-Critic
Safwan Labbi ⋅ Paul Mangold ⋅ Daniil Tiapkin ⋅ Eric Moulines
Abstract
In this paper, we study the role of the critic in actor-critic for entropy-regularized, finite, discounted environments. We establish that, when the critic is exact, using it as a baseline genuinely reduces the variance of the actor's gradient estimator. In this case, actor-critic with stochastic gradients matches the sample complexity of deterministic policy gradient, reaching an $\epsilon$-optimal regularized value with $\tilde{O}(\log(1/\epsilon))$ samples. In practice, the critic is learned alongside the actor, and the variance of the actor update then depends on the critic's bias and variance. Specifically, when the critic's error is sufficiently small, both the variance reduction and the rapid convergence are preserved. This suggests learning the critic first and keeping it up to date after each actor update, underscoring the pivotal role of accurate critic estimation in actor-critic methods.
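For context, a standard formulation of the entropy-regularized objective and of a critic-as-baseline gradient estimator is sketched below; this uses the usual soft value-function definitions, and the paper's exact notation and estimator may differ.
$$
V_\tau^{\pi}(s) \;=\; \mathbb{E}_{\pi}\!\Big[\sum_{t\ge 0}\gamma^{t}\big(r(s_t,a_t)-\tau\log\pi(a_t\mid s_t)\big)\,\Big|\, s_0=s\Big],
\qquad
Q_\tau^{\pi}(s,a) \;=\; r(s,a)+\gamma\,\mathbb{E}_{s'\sim P(\cdot\mid s,a)}\big[V_\tau^{\pi}(s')\big],
$$
with the actor updated via a score-function estimator of the form
$$
\widehat{g}(s,a)\;=\;\nabla_\theta\log\pi_\theta(a\mid s)\,\big(Q_\tau^{\pi_\theta}(s,a)-\tau\log\pi_\theta(a\mid s)-b(s)\big).
$$
Taking the baseline $b(s)$ to be the (exact) critic $V_\tau^{\pi_\theta}(s)$ leaves the estimator's mean unchanged, since $\mathbb{E}_{a\sim\pi_\theta(\cdot\mid s)}[\nabla_\theta\log\pi_\theta(a\mid s)]=0$, while it can reduce its variance.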