

Poster

Value-Evolutionary-Based Reinforcement Learning

Pengyi Li · Jianye Hao · Hongyao Tang · Yan Zheng · Fazl Barez

Hall C 4-9 #1417
[ Paper PDF ]
Tue 23 Jul 2:30 a.m. PDT — 4 a.m. PDT

Abstract:

Combining Evolutionary Algorithms (EAs) and Reinforcement Learning (RL) for policy search has been shown to improve RL performance. However, previous works largely overlook value-based RL in favor of merging EAs with policy-based RL. This paper introduces Value-Evolutionary-Based Reinforcement Learning (VEB-RL), which focuses on the integration of EAs with value-based RL. The framework maintains a population of value functions instead of policies and uses negative Temporal Difference error as the fitness metric for evolution. This metric is more sample-efficient for population evaluation than cumulative reward and is closely tied to the accuracy of the value function approximation. Additionally, VEB-RL lets elites of the population interact with the environment to provide high-quality samples for RL optimization, while the RL value function participates in the population's evolution in each generation. Experiments on MinAtar and Atari demonstrate the superiority of VEB-RL in significantly improving DQN, Rainbow, and SPR. Our code is available at https://github.com/yeshenpy/VEB-RL.
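As a rough illustration of the loop the abstract describes, here is a minimal, self-contained Python sketch. It substitutes a tabular Q-function on a toy random MDP for DQN/Rainbow/SPR, and the population size, mutation noise, squared-TD fitness, and toy environment are illustrative assumptions rather than the authors' implementation (see the linked repository for that).

```python
import numpy as np

# Hedged sketch of the VEB-RL loop: population of value functions, negative TD
# error as fitness, elites collect samples, RL value function joins evolution.
rng = np.random.default_rng(0)
N_S, N_A, GAMMA = 10, 3, 0.99
P = rng.dirichlet(np.ones(N_S), size=(N_S, N_A))   # toy transition probabilities
R = rng.normal(size=(N_S, N_A))                    # toy rewards

def rollout(q, steps=50):
    """Collect (s, a, r, s') transitions by acting greedily w.r.t. q."""
    s, traj = int(rng.integers(N_S)), []
    for _ in range(steps):
        a = int(np.argmax(q[s]))
        s_next = int(rng.choice(N_S, p=P[s, a]))
        traj.append((s, a, R[s, a], s_next))
        s = s_next
    return traj

def td_fitness(q, batch):
    """Fitness = negative mean squared TD error (higher is better)."""
    err = [(r + GAMMA * q[s2].max() - q[s, a]) ** 2 for s, a, r, s2 in batch]
    return -float(np.mean(err))

def q_learning_update(q, batch, lr=0.1):
    """Standard value-based RL update from replayed transitions."""
    for s, a, r, s2 in batch:
        q[s, a] += lr * (r + GAMMA * q[s2].max() - q[s, a])

pop = [rng.normal(size=(N_S, N_A)) for _ in range(5)]  # population of value functions
rl_q = np.zeros((N_S, N_A))                            # the RL value function
buffer = []

for gen in range(20):
    # The RL value function joins the population each generation.
    candidates = pop + [rl_q.copy()]
    # Rank by negative TD error on recent samples instead of cumulative reward.
    eval_batch = buffer[-256:] if buffer else rollout(rl_q)
    candidates.sort(key=lambda q: td_fitness(q, eval_batch), reverse=True)
    # Elites interact with the environment to supply samples for RL optimization.
    for elite in candidates[:2]:
        buffer.extend(rollout(elite))
    # RL update from the shared buffer, then mutate survivors for the next generation.
    sample = [buffer[i] for i in rng.integers(len(buffer), size=256)]
    q_learning_update(rl_q, sample)
    pop = [q + rng.normal(scale=0.05, size=q.shape) for q in candidates[:len(pop)]]
```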
