Optimal Goal-Reaching Reinforcement Learning via Quasimetric Learning
Tongzhou Wang · Antonio Torralba · Phillip Isola · Amy Zhang

Thu Jul 27 04:30 PM -- 06:00 PM (PDT) @ Exhibit Hall 1 #427

In goal-reaching reinforcement learning (RL), the optimal value function has a particular geometry, called a quasimetric structure. This paper introduces Quasimetric Reinforcement Learning (QRL), a new RL method that utilizes quasimetric models to learn optimal value functions. Distinct from prior approaches, the QRL objective is specifically designed for quasimetrics and provides strong theoretical recovery guarantees. Empirically, we conduct thorough analyses on a discretized MountainCar environment, identifying properties of QRL and its advantages over alternatives. On offline and online goal-reaching benchmarks, QRL also demonstrates improved sample efficiency and performance, across both state-based and image-based observations.
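The quasimetric structure mentioned above can be illustrated with a minimal sketch (not from the paper): optimal goal-reaching costs behave like shortest-path distances in a directed graph, which satisfy identity and the triangle inequality but need not be symmetric. The toy transition costs below are hypothetical, chosen only to exhibit the asymmetry.

```python
import math

# Hypothetical 3-state environment with asymmetric transition costs
# (going "back" from state 1 to state 0 is more expensive than going forward).
INF = math.inf
cost = [
    [0.0, 1.0, INF],  # from state 0
    [5.0, 0.0, 1.0],  # from state 1
    [1.0, INF, 0.0],  # from state 2
]

n = len(cost)
d = [row[:] for row in cost]
# Floyd-Warshall: all-pairs shortest-path cost, i.e. the optimal
# cost-to-go d(s, g) from any state s to any goal g.
for k in range(n):
    for i in range(n):
        for j in range(n):
            d[i][j] = min(d[i][j], d[i][k] + d[k][j])

# Quasimetric axioms hold: zero self-distance and the triangle inequality...
assert all(d[i][i] == 0.0 for i in range(n))
assert all(d[i][j] <= d[i][k] + d[k][j]
           for i in range(n) for j in range(n) for k in range(n))
# ...but symmetry does not: d(0, 1) = 1.0 while d(1, 0) = 2.0 (via state 2).
```

Because symmetry fails, symmetric metric models (or metric embeddings) cannot represent such value functions exactly; this is the motivation for using quasimetric models.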

Author Information

Tongzhou Wang (MIT)
Antonio Torralba (MIT)
Phillip Isola (MIT)
Amy Zhang (UT Austin / FAIR)