Poster
Model-Based Reinforcement Learning via Latent-Space Collocation
Oleh Rybkin · Chuning Zhu · Anusha Nagabandi · Kostas Daniilidis · Igor Mordatch · Sergey Levine

Tue Jul 20 09:00 AM -- 11:00 AM (PDT)

The ability to plan into the future while utilizing only raw high-dimensional observations, such as images, can provide autonomous agents with broad and general capabilities. However, realistic tasks require temporally extended reasoning and cannot be solved with only myopic, short-sighted planning. Recent work in model-based reinforcement learning (RL) has shown impressive results on tasks that require only short-horizon reasoning. In this work, we study how long-horizon planning abilities can be improved with an algorithm that optimizes over sequences of states, rather than actions, which allows better credit assignment. To achieve this, we draw on the idea of collocation and adapt it to the image-based setting by leveraging probabilistic latent variable models, resulting in an algorithm that optimizes trajectories over latent variables. Our latent collocation method (LatCo) provides a general and effective visual planning approach, and significantly outperforms prior model-based approaches on challenging visual control tasks with sparse rewards and long-term goals. See the videos on the supplementary website \url{https://sites.google.com/view/latco-mbrl/}.
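The central idea of the abstract, optimizing a sequence of states directly while treating the dynamics as a soft constraint, can be sketched as a toy penalty-method collocation problem. Everything below (the linear latent dynamics, quadratic reward, penalty weight, and finite-difference gradient ascent) is an illustrative assumption for a minimal self-contained example; it is not the paper's learned image-based models or its actual constrained optimizer.

```python
import numpy as np

# Toy stand-ins for the learned latent dynamics and reward models.
# LatCo learns these from images; here they are hand-picked (assumption).
T = 5                                      # planning horizon
A = np.array([[1.0, 0.1], [0.0, 1.0]])    # latent dynamics z' = A z + B a
B = np.array([[0.0], [0.1]])
z0 = np.zeros(2)                          # current latent state
goal = np.array([1.0, 0.0])               # reward peaks at this latent state

def dynamics(z, a):
    return A @ z + B @ a

def reward(z):
    return -np.sum((z - goal) ** 2)

def objective(x, lam):
    """Penalty-form collocation objective: total reward over the planned
    state sequence, minus a penalty on dynamics violations between
    consecutive states. Both states and actions are decision variables."""
    zs = x[:2 * T].reshape(T, 2)          # future latent states z_1..z_T
    acts = x[2 * T:].reshape(T, 1)        # actions a_0..a_{T-1}
    obj, prev = 0.0, z0
    for t in range(T):
        obj += reward(zs[t])
        obj -= lam * np.sum((zs[t] - dynamics(prev, acts[t])) ** 2)
        prev = zs[t]
    return obj

def plan(lam=10.0, steps=2000, lr=0.02, eps=1e-5):
    """Gradient ascent on the collocation objective via finite
    differences -- a crude sketch, not the paper's optimizer."""
    x = np.zeros(3 * T)                   # states and actions, jointly
    for _ in range(steps):
        g = np.zeros_like(x)
        for i in range(x.size):
            d = np.zeros_like(x)
            d[i] = eps
            g[i] = (objective(x + d, lam) - objective(x - d, lam)) / (2 * eps)
        x += lr * g
    return x[:2 * T].reshape(T, 2), x[2 * T:].reshape(T, 1)

zs, acts = plan()
```

Because the states themselves are optimized, the planner can first place them near high-reward regions and only then reconcile them with the dynamics, which is what gives collocation its long-horizon credit-assignment advantage over shooting methods that optimize actions alone.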

Author Information

Oleg Rybkin (University of Pennsylvania)

Oleg is a Ph.D. student in the GRASP laboratory at the University of Pennsylvania, advised by Kostas Daniilidis. He received his Bachelor's degree from the Czech Technical University in Prague. He is interested in deep learning and computer vision and, more specifically, in using deep predictive models to discover semantic structure in video, as well as applications of these models to planning. Prior to his Ph.D. studies, he worked on camera geometry as an undergraduate researcher advised by Tomas Pajdla. He was a visiting student researcher at INRIA (advised by Josef Sivic), at the Tokyo Institute of Technology (advised by Akihiko Torii), and at UC Berkeley (advised by Sergey Levine).

Chuning Zhu (University of Pennsylvania)
Anusha Nagabandi (UC Berkeley)
Kostas Daniilidis (University of Pennsylvania)
Igor Mordatch (Google Brain)
Sergey Levine (UC Berkeley)

Sergey Levine received a BS and MS in Computer Science from Stanford University in 2009, and a Ph.D. in Computer Science from Stanford University in 2014. He joined the faculty of the Department of Electrical Engineering and Computer Sciences at UC Berkeley in fall 2016. His work focuses on machine learning for decision making and control, with an emphasis on deep learning and reinforcement learning algorithms. Applications of his work include autonomous robots and vehicles, as well as computer vision and graphics. His research includes developing algorithms for end-to-end training of deep neural network policies that combine perception and control, scalable algorithms for inverse reinforcement learning, deep reinforcement learning algorithms, and more.
