In this paper, we address the discovery of robotic options from demonstrations in an unsupervised manner. Specifically, we present a framework that jointly learns low-level control policies and higher-level policies governing when to invoke them, from demonstrations of a robot performing various tasks. By representing options as continuous latent variables, we frame the problem of learning these options as latent variable inference. We then present a temporally causal variant of variational inference, based on a temporal factorization of trajectory likelihoods, which allows us to infer options in an unsupervised manner. We demonstrate the ability of our framework to learn such options across three robotic demonstration datasets.
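To make the latent-variable framing concrete, here is a minimal toy sketch of the two key ideas in the abstract: an option represented as a latent variable that selects a low-level policy, and a trajectory likelihood that factorizes over time steps. Everything below (the 1-D Gaussian action model, the two hand-coded policies, and MAP selection in place of variational inference) is an illustrative assumption, not the paper's actual model.

```python
import math

def gaussian_logpdf(x, mean, std):
    """Log-density of a 1-D Gaussian; stands in for a stochastic policy."""
    return -0.5 * math.log(2 * math.pi * std ** 2) - (x - mean) ** 2 / (2 * std ** 2)

# Two toy low-level policies (the "options"); each maps a state to an action mean.
# These are hypothetical stand-ins for learned control policies.
OPTIONS = {
    0: lambda s: 1.0,   # e.g. "move forward": constant action
    1: lambda s: -s,    # e.g. "return to origin": proportional control
}

def trajectory_loglik(states, actions, option, std=0.5):
    """Temporally factorized log p(a_1..a_T | s_1..s_T, z):
    the sum over time steps of per-step action log-likelihoods."""
    policy = OPTIONS[option]
    return sum(gaussian_logpdf(a, policy(s), std)
               for s, a in zip(states, actions))

def infer_option(states, actions):
    """MAP inference of the latent option under a uniform prior --
    a crude stand-in for the paper's variational inference."""
    return max(OPTIONS, key=lambda z: trajectory_loglik(states, actions, z))

# A demonstration with roughly constant forward actions is best explained
# by option 0.
states = [0.0, 0.5, 1.0]
actions = [1.0, 0.9, 1.1]
print(infer_option(states, actions))  # -> 0
```

In the paper the options are continuous rather than an enumerable set, so the argmax over latents is replaced by amortized variational inference, but the temporal factorization of the trajectory likelihood plays the same role.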
Author Information
Tanmay Shankar (Facebook AI Research)
Abhinav Gupta (Carnegie Mellon University)
More from the Same Authors
- 2020: Neural Dynamic Policies for End-to-End Sensorimotor Learning
  Abhinav Gupta
- 2021 Poster: PixelTransformer: Sample Conditioned Signal Generation
  Shubham Tulsiani · Abhinav Gupta
- 2021 Spotlight: PixelTransformer: Sample Conditioned Signal Generation
  Shubham Tulsiani · Abhinav Gupta
- 2019 Poster: Self-Supervised Exploration via Disagreement
  Deepak Pathak · Dhiraj Gandhi · Abhinav Gupta
- 2019 Oral: Self-Supervised Exploration via Disagreement
  Deepak Pathak · Dhiraj Gandhi · Abhinav Gupta
- 2017 Poster: Robust Adversarial Reinforcement Learning
  Lerrel Pinto · James Davidson · Rahul Sukthankar · Abhinav Gupta
- 2017 Talk: Robust Adversarial Reinforcement Learning
  Lerrel Pinto · James Davidson · Rahul Sukthankar · Abhinav Gupta