Poster

Rich-Observation Reinforcement Learning with Continuous Latent Dynamics

Yuda Song · Lili Wu · Dylan Foster · Akshay Krishnamurthy

Hall C 4-9 #1216
[ Project Page ]
Wed 24 Jul 2:30 a.m. PDT — 4 a.m. PDT

Abstract:

Sample efficiency and reliability remain major bottlenecks to the wide adoption of reinforcement learning algorithms in continuous settings with high-dimensional perceptual inputs. Toward addressing these challenges, we introduce a new theoretical framework, RichCLD (“Rich-Observation RL with Continuous Latent Dynamics”), in which the agent performs control based on high-dimensional observations, but the environment is governed by low-dimensional latent states and Lipschitz continuous dynamics. Our main contribution is a new algorithm for this setting that is provably statistically and computationally efficient. The core of our algorithm is a new representation learning objective; we show that prior representation learning schemes tailored to discrete dynamics do not naturally extend to the continuous setting. Our new objective is amenable to practical implementation, and empirically, we find that it compares favorably to prior schemes in a standard evaluation protocol. We further provide several insights into the statistical complexity of the RichCLD framework, in particular proving that certain notions of Lipschitzness that admit sample-efficient learning in the absence of rich observations are insufficient in the rich-observation setting.
