

Poster

Learning Low-dimensional Latent Dynamics from High-dimensional Observations: Non-asymptotics and Lower Bounds

Yuyang Zhang · Shahriar Talebi · Na Li


Abstract: In this paper, we focus on learning a linear time-invariant (LTI) model with low-dimensional latent variables but high-dimensional observations. We provide an algorithm that recovers the high-dimensional features, i.e., the column space of the observer, embeds the data into low dimensions, and learns the low-dimensional model parameters. Our algorithm enjoys a sample complexity guarantee of order $\tilde{\mathcal{O}}(n/\epsilon^2)$, where $n$ is the observation dimension. We further establish a fundamental lower bound showing that this complexity is optimal up to logarithmic factors and dimension-independent constants. We show that this unavoidable linear factor of $n$ reflects the error in learning the observer's column space in the presence of high-dimensional noise. Extending our results, we consider a meta-learning problem inspired by various real-world applications, where the observer's column space can be collectively learned from datasets of multiple similar LTI systems. We then propose an end-to-end algorithm for learning similar LTI systems from a meta-dataset, which breaks the sample complexity lower bound in certain scenarios.
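The sketch below illustrates the kind of pipeline the abstract describes, assuming SVD-based estimation of the observer's column space and ordinary least squares for the latent dynamics; the helper `learn_latent_lti` and all parameter choices are hypothetical illustrations, not the paper's actual algorithm or guarantees.

```python
import numpy as np

def learn_latent_lti(Y, r):
    """Hypothetical sketch: estimate the observer's column space via SVD,
    embed observations into r dimensions, and fit the latent transition
    matrix by least squares. Y has shape (T, n): T observations in R^n.
    """
    # Step 1: estimate the column space of the observer C from the
    # top-r left singular vectors of the stacked observations.
    U, _, _ = np.linalg.svd(Y.T, full_matrices=False)
    C_hat = U[:, :r]                      # (n, r) orthonormal basis

    # Step 2: embed the observations into the estimated subspace.
    X_hat = Y @ C_hat                     # (T, r) latent state estimates

    # Step 3: fit the latent transition matrix A by regressing each
    # embedded state on its predecessor: x_{t+1} ~ A x_t.
    X_now, X_next = X_hat[:-1], X_hat[1:]
    W, *_ = np.linalg.lstsq(X_now, X_next, rcond=None)
    # lstsq solves X_now @ W = X_next, so A_hat is the transpose of W.
    return C_hat, W.T

# Usage on synthetic data: n = 100-dimensional observations of an
# r = 3-dimensional latent LTI system (all noise levels illustrative).
rng = np.random.default_rng(0)
n, r, T = 100, 3, 2000
A = 0.9 * np.linalg.qr(rng.standard_normal((r, r)))[0]  # stable dynamics
C = rng.standard_normal((n, r))                         # observer matrix
x = rng.standard_normal(r)
Y = np.empty((T, n))
for t in range(T):
    Y[t] = C @ x + 0.1 * rng.standard_normal(n)   # noisy observation
    x = A @ x + 0.1 * rng.standard_normal(r)      # latent transition
C_hat, A_hat = learn_latent_lti(Y, r)
```

Note that the latent parameters are identifiable only up to a change of basis: `A_hat` estimates a similarity transform of the true `A`, since any invertible reparameterization of the latent state yields the same observation distribution.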
