Poster

RVI-SAC: Average Reward Off-Policy Deep Reinforcement Learning

Yukinari Hisaki · Isao Ono

Hall C 4-9 #1312
Wed 24 Jul 2:30 a.m. PDT — 4 a.m. PDT

Abstract:

In this paper, we propose an off-policy deep reinforcement learning (DRL) method based on the average reward criterion. Most existing DRL methods employ the discounted reward criterion, which can lead to a discrepancy between the training objective and performance metrics in continuing tasks, making the average reward criterion a recommended alternative. We introduce RVI-SAC, an extension of the state-of-the-art off-policy DRL method Soft Actor-Critic (SAC) to the average reward criterion. Our proposal consists of (1) Critic updates based on RVI Q-learning, (2) Actor updates derived from the average reward soft policy improvement theorem, and (3) automatic adjustment of the Reset Cost, which enables average reward reinforcement learning to be applied to tasks with termination. We apply our method to a subset of locomotion tasks from Gymnasium's MuJoCo suite and demonstrate that RVI-SAC shows competitive performance compared to existing methods.
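The Critic update in item (1) builds on RVI (relative value iteration) Q-learning, a classical average-reward algorithm in which the discount factor is dropped and a reference value f(Q) is subtracted from the reward instead. As background only, below is a minimal tabular sketch of that classical update, not the paper's deep, soft variant; the Gymnasium-style environment interface, the hyperparameters, and the choice f(Q) = mean(Q) are illustrative assumptions.

```python
import numpy as np

def rvi_q_learning(env, num_steps=100_000, alpha=0.1, epsilon=0.1, seed=0):
    """Tabular RVI Q-learning for average-reward MDPs (Abounadi et al., 2001).

    Assumes a Gymnasium-style env with discrete observation and action
    spaces; all settings here are illustrative, not the paper's.
    """
    rng = np.random.default_rng(seed)
    n_s, n_a = env.observation_space.n, env.action_space.n
    Q = np.zeros((n_s, n_a))

    s, _ = env.reset(seed=seed)
    for _ in range(num_steps):
        # epsilon-greedy behavior policy
        if rng.random() < epsilon:
            a = int(rng.integers(n_a))
        else:
            a = int(np.argmax(Q[s]))

        s_next, r, terminated, truncated, _ = env.step(a)

        # Reference function f(Q); its value converges to the optimal
        # average reward. Here f(Q) is the mean over all entries.
        f_Q = Q.mean()

        # RVI Q-learning update: no discount factor; the reward is
        # offset by f(Q) rather than discounted.
        td_target = r - f_Q + np.max(Q[s_next])
        Q[s, a] += alpha * (td_target - Q[s, a])

        s = s_next
        if terminated or truncated:
            s, _ = env.reset()

    return Q, Q.mean()  # Q.mean() serves as the average-reward estimate
```

RVI-SAC replaces the tabular maximum with a soft value estimate under function approximation and handles terminating tasks via the automatically adjusted Reset Cost; see the paper for the actual critic target.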
