
Extracting Latent State Representations with Linear Dynamics from Rich Observations

Abraham Frandsen · Rong Ge · Holden Lee

Hall E #1117

Keywords: [ OPT: Non-Convex ] [ MISC: Representation Learning ] [ T: Reinforcement Learning and Planning ]


Recently, many reinforcement learning techniques have been shown to have provable guarantees in the simple case of linear dynamics, especially in problems like linear quadratic regulators. However, in practice many tasks require learning a policy from rich, high-dimensional features such as images, whose dynamics are unlikely to be linear. We consider a setting where there is a hidden linear subspace of the high-dimensional feature space in which the dynamics are linear. We design natural objectives based on forward and inverse dynamics models. We prove that these objectives can be efficiently optimized and their local optimizers extract the hidden linear subspace. We empirically verify our theoretical results with synthetic data and explore the effectiveness of our approach (generalized to nonlinear settings) in simple control tasks with rich observations.
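The forward-dynamics idea in the abstract can be sketched on synthetic data. The following is a minimal illustration, not the paper's exact objective or algorithm: a low-dimensional latent state evolves linearly, is embedded in a high-dimensional observation whose remaining directions carry unpredictable noise, and a projection U is learned by minimizing a one-step forward-prediction loss with U constrained to have orthonormal rows (ruling out the trivial solution U = 0). All dimensions, noise levels, and the dynamics matrix A below are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a 2-dim latent state with linear dynamics, embedded in a
# 20-dim observation whose remaining directions carry unpredictable noise.
d, D, T = 2, 20, 1000
A = np.array([[0.9, 0.2],
              [-0.2, 0.9]])                 # stable latent dynamics z_{t+1} = A z_t + noise
basis = np.linalg.qr(rng.standard_normal((D, D)))[0]
Q, N = basis[:, :d], basis[:, d:]           # hidden linear subspace and its complement

z = np.zeros((T, d))
for t in range(T - 1):
    z[t + 1] = A @ z[t] + 0.1 * rng.standard_normal(d)
X = z @ Q.T + 0.5 * rng.standard_normal((T, D - d)) @ N.T  # rich observations

# Forward-dynamics objective: min_{U, B} (1/n) * sum_t ||U x_{t+1} - B U x_t||^2,
# with U constrained to have orthonormal rows. B is solved in closed form for the
# current U; U then takes a gradient step followed by re-orthonormalization.
U = np.linalg.qr(rng.standard_normal((D, d)))[0].T   # initial d x D projection
X0, X1 = X[:-1], X[1:]
n, lr = len(X0), 0.1
for step in range(5000):
    h0, h1 = X0 @ U.T, X1 @ U.T                      # projected states
    B = np.linalg.lstsq(h0, h1, rcond=None)[0].T     # best latent dynamics for current U
    r = h1 - h0 @ B.T                                # forward-prediction residuals
    gU = (2.0 / n) * (r.T @ X1 - B.T @ r.T @ X0)     # gradient of the loss w.r.t. U
    U = np.linalg.qr((U - lr * gU).T)[0].T           # step, then re-orthonormalize rows

# The rows of U should now span the hidden subspace spanned by Q's columns: the
# singular values of U @ Q are the cosines of the principal angles between them.
alignment = np.linalg.svd(U @ Q, compute_uv=False).min()
print(f"subspace alignment (1.0 = perfect): {alignment:.3f}")
```

The noise directions are penalized because they are unpredictable: projecting onto them leaves a large residual, while the latent subspace leaves only the small process noise, so the prediction loss steers U toward the hidden subspace. The paper's actual guarantees concern carefully designed forward and inverse dynamics objectives whose local optimizers provably extract the subspace; this sketch only conveys the flavor of the forward-dynamics part.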
