Generative Modeling and Model-Based Reasoning for Robotics and AI
Aravind Rajeswaran · Emanuel Todorov · Igor Mordatch · William Agnew · Amy Zhang · Joelle Pineau · Michael Chang · Dumitru Erhan · Sergey Levine · Kimberly Stachenfeld · Marvin Zhang

Fri Jun 14 08:30 AM -- 06:00 PM (PDT) @ Hall A
Event URL: https://sites.google.com/view/mbrl-icml2019

In the recent explosion of interest in deep RL, “model-free” approaches based on Q-learning and actor-critic architectures have received the most attention due to their flexibility and ease of use. However, this generality often comes at the expense of efficiency (statistical as well as computational) and robustness. The large number of required samples and safety concerns often limit direct use of model-free RL for real-world settings.

Model-based methods are expected to be more efficient. Given accurate models, trajectory optimization and Monte-Carlo planning methods can efficiently compute near-optimal actions in varied contexts. Advances in generative modeling and in unsupervised and self-supervised learning provide methods for learning models and representations that support subsequent planning and reasoning. Against this backdrop, our workshop aims to bring together researchers in generative modeling and model-based control to discuss research questions at their intersection, and to advance the state of the art in model-based RL for robotics and AI. In particular, this workshop aims to make progress on questions related to:

1. How can we learn generative models efficiently? The role of data, structures, priors, and uncertainty.
2. How can we use generative models efficiently for planning and reasoning? The role of derivatives, sampling, hierarchies, uncertainty, counterfactual reasoning, etc.
3. How can we harmoniously integrate model learning and model-based decision making?
4. How can we learn compositional structure and environmental constraints? Can this be leveraged for better generalization and reasoning?
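To make the planning side of this picture concrete, below is a minimal, self-contained sketch (not taken from the workshop) of one of the simplest model-based planners: random-shooting model-predictive control. The `dynamics` and `reward` functions here are toy stand-ins, a hand-coded 1-D point mass, for what would in practice be a learned generative model of the environment; all names and parameters are illustrative assumptions.

```python
import numpy as np

# Toy dynamics standing in for a learned generative model:
# a 1-D point mass with state (position, velocity).
def dynamics(state, action):
    pos, vel = state
    vel = vel + 0.1 * action
    pos = pos + 0.1 * vel
    return np.array([pos, vel])

# Hypothetical reward: stay at the origin with low velocity and effort.
def reward(state, action):
    pos, vel = state
    return -(pos ** 2 + 0.1 * vel ** 2 + 0.01 * action ** 2)

def plan_random_shooting(state, horizon=10, n_candidates=500, rng=None):
    """Sample candidate action sequences, roll each one through the model,
    and return the first action of the best-scoring sequence (MPC-style)."""
    if rng is None:
        rng = np.random.default_rng(0)
    candidates = rng.uniform(-1.0, 1.0, size=(n_candidates, horizon))
    best_return, best_action = -np.inf, 0.0
    for seq in candidates:
        s, total = state, 0.0
        for a in seq:
            total += reward(s, a)
            s = dynamics(s, a)
        if total > best_return:
            best_return, best_action = total, seq[0]
    return best_action

state = np.array([1.0, 0.0])  # start one unit away from the goal
action = plan_random_shooting(state)
```

Replanning at every step (executing only the first action, then re-running the planner) is what makes this model-predictive control; the open questions above concern how to learn the model this sketch takes as given, and how to plan with it when it is imperfect.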

Author Information

Aravind Rajeswaran (University of Washington)
Emanuel Todorov (University of Washington)
Igor Mordatch (OpenAI)
William Agnew (University of Washington)
Amy Zhang (McGill University)
Joelle Pineau (McGill University / Facebook)
Michael Chang (UC Berkeley)
Dumitru Erhan (Google Brain)
Sergey Levine (UC Berkeley)

Sergey Levine received a BS and MS in Computer Science from Stanford University in 2009, and a Ph.D. in Computer Science from Stanford University in 2014. He joined the faculty of the Department of Electrical Engineering and Computer Sciences at UC Berkeley in fall 2016. His work focuses on machine learning for decision making and control, with an emphasis on deep learning and reinforcement learning algorithms. Applications of his work include autonomous robots and vehicles, as well as computer vision and graphics. His research includes developing algorithms for end-to-end training of deep neural network policies that combine perception and control, scalable algorithms for inverse reinforcement learning, deep reinforcement learning algorithms, and more.

Kimberly Stachenfeld (Google)
Marvin Zhang (UC Berkeley)