Estimating Q(s,s') with Deep Deterministic Dynamics Gradients
Ashley Edwards · Himanshu Sahni · Rosanne Liu · Jane Hung · Ankit Jain · Rui Wang · Adrien Ecoffet · Thomas Miconi · Charles Isbell · Jason Yosinski

Thu Jul 16 06:00 AM -- 06:45 AM & Thu Jul 16 05:00 PM -- 05:45 PM (PDT)
In this paper, we introduce a novel form of value function, $Q(s, s')$, that expresses the utility of transitioning from a state $s$ to a neighboring state $s'$ and then acting optimally thereafter. In order to derive an optimal policy, we develop a forward dynamics model that learns to make next-state predictions that maximize this value. This formulation decouples actions from values while still learning off-policy. We highlight the benefits of this approach in terms of value function transfer, learning within redundant action spaces, and learning off-policy from state observations generated by sub-optimal or completely random policies. Code and videos are available at http://sites.google.com/view/qss-paper.
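To make the formulation concrete, here is a minimal tabular sketch of a Q(s, s') value function on a hypothetical 5-state chain MDP (the toy environment, reward, and neighbor structure are assumptions for illustration, not from the paper). Instead of state-action pairs, value iteration runs over state-transition pairs, and the induced policy selects the best neighboring state, decoupling "where to go next" from the action that achieves it:

```python
# Toy illustration of the Q(s, s') formulation on a 5-state chain.
# Hypothetical MDP (not from the paper): from state s you may move to
# s-1 or s+1 (clipped to [0, 4]); arriving at state 4 yields reward 1
# and terminates. Bellman target: Q(s, s') = r(s, s') + gamma * max_{s''} Q(s', s'').

GAMMA = 0.9
N = 5

def neighbors(s):
    """States reachable in one step from s."""
    return sorted({max(s - 1, 0), min(s + 1, N - 1)})

def reward(s, s2):
    return 1.0 if s2 == N - 1 else 0.0

# Value iteration over state-transition pairs instead of state-action pairs.
Q = {(s, s2): 0.0 for s in range(N) for s2 in neighbors(s)}
for _ in range(100):
    for (s, s2) in Q:
        if s2 == N - 1:  # terminal successor: no bootstrapping
            Q[(s, s2)] = reward(s, s2)
        else:
            Q[(s, s2)] = reward(s, s2) + GAMMA * max(
                Q[(s2, s3)] for s3 in neighbors(s2)
            )

# The induced policy picks the neighboring state with the highest Q(s, s');
# recovering the action that realizes the transition is left to a separate
# (here omitted) inverse dynamics model.
policy = {s: max(neighbors(s), key=lambda s2: Q[(s, s2)]) for s in range(N)}
print(policy)  # → {0: 1, 1: 2, 2: 3, 3: 4, 4: 4}
```

In the full method this tabular maximization over successor states is replaced by a learned forward dynamics model trained to output the value-maximizing next state.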

Author Information

Ashley Edwards (Uber AI)
Himanshu Sahni (Georgia Institute of Technology)
Rosanne Liu (ML Collective)
Jane Hung (Uber)
Ankit Jain (Uber AI)
Rui Wang (Uber AI)
Adrien Ecoffet (OpenAI)
Thomas Miconi (Uber AI Labs)
Charles Isbell (Georgia Institute of Technology)
Jason Yosinski (Deep Collective)