Oral
Smoothed Action Value Functions for Learning Gaussian Policies
Ofir Nachum · Mohammad Norouzi · George Tucker · Dale Schuurmans

Thu Jul 12 05:50 AM -- 06:10 AM (PDT) @ A1

State-action value functions (i.e., Q-values) are ubiquitous in reinforcement learning (RL), giving rise to popular algorithms such as SARSA and Q-learning. We propose a new notion of action value defined by a Gaussian smoothed version of the expected Q-value. We show that such smoothed Q-values still satisfy a Bellman equation, making them learnable from experience sampled from an environment. Moreover, the gradients of expected reward with respect to the mean and covariance of a parameterized Gaussian policy can be recovered from the gradient and Hessian of the smoothed Q-value function. Based on these relationships we develop new algorithms for training a Gaussian policy directly from a learned smoothed Q-value approximator. The approach is additionally amenable to proximal optimization by augmenting the objective with a penalty on KL-divergence from a previous policy. We find that the ability to learn both a mean and covariance during training leads to significantly improved results on standard continuous control benchmarks.
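
The relationships described above can be summarized as follows (a sketch reconstructed from the abstract's description; the paper's exact notation and conditioning may differ). The smoothed action value is the expectation of the ordinary Q-value over Gaussian-perturbed actions, and the gradients of the expected reward J with respect to the policy mean and covariance are tied to the gradient and Hessian of that smoothed value at the mean action:

\tilde{Q}^{\pi}(s, a) \;=\; \mathbb{E}_{\tilde{a} \sim \mathcal{N}(a, \Sigma)}\big[\, Q^{\pi}(s, \tilde{a}) \,\big]

\nabla_{\mu} J(\mu, \Sigma) \;\propto\; \mathbb{E}_{s}\big[\, \nabla_{a} \tilde{Q}^{\pi}(s, a) \,\big|_{a = \mu(s)} \,\big]

\nabla_{\Sigma} J(\mu, \Sigma) \;\propto\; \mathbb{E}_{s}\big[\, \tfrac{1}{2}\, \nabla^{2}_{a} \tilde{Q}^{\pi}(s, a) \,\big|_{a = \mu(s)} \,\big]

Under this reading, a single learned approximator of the smoothed Q-value supplies update directions for both the mean and the covariance of the Gaussian policy, which is what allows the covariance to be learned rather than fixed.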

Author Information

Ofir Nachum (Google Brain)
Mohammad Norouzi (Google Brain)
George Tucker (Google Brain)
Dale Schuurmans (University of Alberta)
