Reinforcement learning (RL) has been successfully used to solve many continuous control tasks. Despite these impressive results, however, fundamental questions regarding the sample complexity of RL on continuous problems remain open. We study the performance of RL in this setting by considering the behavior of the Least-Squares Temporal Difference (LSTD) estimator on the classic Linear Quadratic Regulator (LQR) problem from optimal control. We give the first finite-time analysis of the number of samples needed to estimate the value function of a fixed static state-feedback policy to within epsilon-relative error. In the process of deriving our result, we give a general characterization of when the minimum eigenvalue of the empirical covariance matrix formed along the sample path of a fast-mixing stochastic process concentrates above zero, extending a result of Koltchinskii and Mendelson from the independent-covariates setting. Finally, we provide experimental evidence indicating that our analysis correctly captures the qualitative behavior of LSTD on several LQR instances.
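To make the setting concrete, the following is a minimal sketch of LSTD value-function estimation for a fixed state-feedback policy on a discounted LQR instance. It is not the paper's exact estimator or experimental setup: the matrices A, B, Q, R, the gain K, the discount factor, the noise level, and the trajectory length are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2-state LQR instance with dynamics x_{t+1} = A x_t + B u_t + w_t
# and quadratic cost c(x, u) = x'Qx + u'Ru; K is a fixed stabilizing policy u = Kx.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)
K = np.array([[-0.1, -0.5]])
gamma = 0.95      # discount factor
sigma_w = 1.0     # process-noise standard deviation

def phi(x):
    """Quadratic features: upper triangle of x x^T (off-diagonals doubled
    to absorb the symmetric pair) plus a constant for the value offset."""
    xxT = np.outer(x, x)
    iu = np.triu_indices(len(x))
    f = xxT[iu].copy()
    f[iu[0] != iu[1]] *= 2.0
    return np.concatenate([f, [1.0]])

# Roll out the closed-loop system and accumulate the LSTD linear system:
#   (sum_t phi_t (phi_t - gamma * phi_{t+1})^T) theta = sum_t phi_t c_t
T = 20000
x = np.zeros(2)
d = phi(x).size
M = np.zeros((d, d))
b = np.zeros(d)
for _ in range(T):
    u = K @ x
    cost = x @ Q @ x + u @ R @ u
    x_next = A @ x + B @ u + sigma_w * rng.standard_normal(2)
    f, f_next = phi(x), phi(x_next)
    M += np.outer(f, f - gamma * f_next)
    b += f * cost
    x = x_next

theta = np.linalg.lstsq(M, b, rcond=None)[0]

# Ground truth: iterate the fixed-policy Bellman (Lyapunov) recursion
#   P = Q + K'RK + gamma * (A + BK)' P (A + BK)
L = A + B @ K
Qk = Q + K.T @ R @ K
P = Qk.copy()
for _ in range(1000):
    P = Qk + gamma * L.T @ P @ L

print("estimated P entries:", theta[:3])
print("true P entries:     ", P[np.triu_indices(2)])
```

With enough samples, the first three coordinates of `theta` approach the upper-triangular entries of the true value matrix P; the constant feature absorbs the noise-driven offset in the discounted value function.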
Author Information
Stephen Tu (UC Berkeley)
Benjamin Recht (Berkeley)
Benjamin Recht is an Associate Professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. Ben's research group studies the theory and practice of optimization algorithms with a focus on applications in machine learning, data analysis, and controls. Ben is the recipient of a Presidential Early Career Award for Scientists and Engineers, an Alfred P. Sloan Research Fellowship, the 2012 SIAM/MOS Lagrange Prize in Continuous Optimization, the 2014 Jamon Prize, the 2015 William O. Baker Award for Initiatives in Research, and the 2017 NIPS Test of Time Award.
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Poster: Least-Squares Temporal Difference Learning for the Linear Quadratic Regulator »
  Wed. Jul 11th 04:15 -- 07:00 PM, Room Hall B #104
More from the Same Authors
- 2021 Poster: Quantifying Availability and Discovery in Recommender Systems via Stochastic Reachability »
  Mihaela Curmei · Sarah Dean · Benjamin Recht
- 2021 Spotlight: Quantifying Availability and Discovery in Recommender Systems via Stochastic Reachability »
  Mihaela Curmei · Sarah Dean · Benjamin Recht
- 2021 Poster: Representation Matters: Assessing the Importance of Subgroup Allocations in Training Data »
  Esther Rolf · Theodora Worledge · Benjamin Recht · Michael Jordan
- 2021 Spotlight: Representation Matters: Assessing the Importance of Subgroup Allocations in Training Data »
  Esther Rolf · Theodora Worledge · Benjamin Recht · Michael Jordan
- 2020 Poster: Neural Kernels Without Tangents »
  Vaishaal Shankar · Alex Fang · Wenshuo Guo · Sara Fridovich-Keil · Jonathan Ragan-Kelley · Ludwig Schmidt · Benjamin Recht
- 2020 Poster: Evaluating Machine Accuracy on ImageNet »
  Vaishaal Shankar · Rebecca Roelofs · Horia Mania · Alex Fang · Benjamin Recht · Ludwig Schmidt
- 2020 Poster: The Effect of Natural Distribution Shift on Question Answering Models »
  John Miller · Karl Krauth · Benjamin Recht · Ludwig Schmidt
- 2019 Poster: Do ImageNet Classifiers Generalize to ImageNet? »
  Benjamin Recht · Rebecca Roelofs · Ludwig Schmidt · Vaishaal Shankar
- 2019 Oral: Do ImageNet Classifiers Generalize to ImageNet? »
  Benjamin Recht · Rebecca Roelofs · Ludwig Schmidt · Vaishaal Shankar
- 2018 Tutorial: Optimization Perspectives on Learning to Control »
  Benjamin Recht
- 2017 Poster: Breaking Locality Accelerates Block Gauss-Seidel »
  Stephen Tu · Shivaram Venkataraman · Ashia Wilson · Alex Gittens · Michael Jordan · Benjamin Recht
- 2017 Talk: Breaking Locality Accelerates Block Gauss-Seidel »
  Stephen Tu · Shivaram Venkataraman · Ashia Wilson · Alex Gittens · Michael Jordan · Benjamin Recht