Poster
VA-learning as a more efficient alternative to Q-learning
Yunhao Tang · Remi Munos · Mark Rowland · Michal Valko

Tue Jul 25 02:00 PM -- 04:30 PM (PDT) @ Exhibit Hall 1 #431

In reinforcement learning, the advantage function is critical for policy improvement, but it is often extracted from a learned Q-function. A natural question is: why not learn the advantage function directly? In this work, we introduce VA-learning, which directly learns the advantage function and the value function using bootstrapping, without explicit reference to Q-functions. VA-learning learns off-policy and enjoys theoretical guarantees similar to those of Q-learning. Thanks to the direct learning of the advantage and value functions, VA-learning improves sample efficiency over Q-learning, both in tabular implementations and in deep RL agents on Atari-57 games. We also identify a close connection between VA-learning and the dueling architecture, which partially explains why a simple architectural change to DQN agents tends to improve performance.
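The abstract does not spell out the update rule, so the following is only a minimal tabular sketch of the general idea it describes: maintain separate value and advantage tables, never store an explicit Q-table, and bootstrap through the implicit Q = V + A. The function name, step sizes, and the exact form of the update (a shared TD error applied to both tables) are assumptions for illustration, not the paper's definitive algorithm.

```python
import numpy as np

def va_learning_update(V, Adv, s, a, r, s_next, done, alpha=0.1, gamma=0.99):
    """Illustrative tabular update: V is a (num_states,) array, Adv is a
    (num_states, num_actions) array. The implicit Q-value is V[s] + Adv[s, a];
    no explicit Q-table is ever stored. This is an assumed update rule, not
    necessarily the one used in the paper."""
    # Greedy bootstrap target built from the implicit Q = V + Adv.
    q_next = 0.0 if done else np.max(V[s_next] + Adv[s_next])
    td_error = r + gamma * q_next - (V[s] + Adv[s, a])
    # A shared TD error nudges both the value and the advantage estimates.
    V[s] += alpha * td_error
    Adv[s, a] += alpha * td_error
    return V, Adv
```

Acting greedily with respect to the implicit Q (e.g., `np.argmax(V[s] + Adv[s])`) mirrors how a dueling network combines its value and advantage heads, which is the connection the abstract alludes to.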

Author Information

Yunhao Tang (Google DeepMind)
Remi Munos (DeepMind)
Mark Rowland (Google DeepMind)
Michal Valko (Google DeepMind / Inria / MVA)
