Workshop
Fri Jul 23 05:45 AM -- 02:30 PM (PDT)
ICML 2021 Workshop on Unsupervised Reinforcement Learning
Feryal Behbahani · Joelle Pineau · Lerrel Pinto · Roberta Raileanu · Aravind Srinivas · Denis Yarats · Amy Zhang

Unsupervised learning has recently begun to deliver on its promise: in natural language processing and computer vision, large-scale unsupervised pre-training now enables fine-tuning on downstream supervised tasks with limited labeled data. This is particularly appealing for reinforcement learning, where real-world rollouts annotated with reward signals or human demonstrations are expensive to collect. We therefore believe a workshop at the intersection of unsupervised and reinforcement learning is timely, and we hope to bring together researchers with diverse views on how to make further progress in this exciting and open-ended subfield.

Schedule

Opening remarks
Invited Talk by David Ha (Invited talk)
Invited Talk by Alessandro Lazaric (Invited talk)
Invited Talk by Kelsey Allen (Invited talk)
Coffee break and Poster Session (Poster session)
Invited Talk by Danijar Hafner (Invited talk)
Invited Talk by Nan Rosemary Ke (Invited talk)
Lunch and Poster Session (Poster session)
Oral Presentation: Discovering and Achieving Goals with World Models (Oral presentation)
Oral Presentation: Planning from Pixels in Environments with Combinatorially Hard Search Spaces (Oral presentation)
Oral Presentation: Learning Task Agnostic Skills with Data-driven Guidance (Oral presentation)
Invited Talk by Kianté Brantley (Invited talk)
Coffee break and Poster Session (Poster session)
Invited Talk by Chelsea Finn (Invited talk)
Invited Talk by Pieter Abbeel (Invited talk)
Panel Discussion
Accepted papers

Intrinsic Control of Variational Beliefs in Dynamic Partially-Observed Visual Environments (Poster)
Data-Efficient Exploration with Self Play for Atari (Poster)
Learning to Represent State with Perceptual Schemata (Poster)
Exploration-Driven Representation Learning in Reinforcement Learning (Poster)
Reinforcement Learning as One Big Sequence Modeling Problem (Poster)
Episodic Memory for Subjective-Timescale Models (Poster)
Decision Transformer: Reinforcement Learning via Sequence Modeling (Poster)
Visual Adversarial Imitation Learning using Variational Models (Poster)
Stabilizing Deep Q-Learning with ConvNets and Vision Transformers under Data Augmentation (Poster)
CoBERL: Contrastive BERT for Reinforcement Learning (Poster)
SparseDice: Imitation Learning for Temporally Sparse Data via Regularization (Poster)
Representation Learning for Out-of-distribution Generalization in Downstream Tasks (Poster)
Tangent Space Least Adaptive Clustering (Poster)
Decoupling Exploration and Exploitation in Reinforcement Learning (Poster)
Disentangled Predictive Representation for Meta-Reinforcement Learning (Poster)
Learning Task Agnostic Skills with Data-driven Guidance (Poster)
Density-Based Bonuses on Learned Representations for Reward-Free Exploration in Deep Reinforcement Learning (Poster)
Beyond Fine-Tuning: Transferring Behavior in Reinforcement Learning (Poster)
Planning from Pixels in Environments with Combinatorially Hard Search Spaces (Poster)
Direct then Diffuse: Incremental Unsupervised Skill Discovery for State Covering and Goal Reaching (Poster)
The Importance of Non-Markovianity in Maximum State Entropy Exploration (Poster)
Learning to Explore Multiple Environments without Rewards (Poster)
Pretrained Encoders are All You Need (Poster)
Explore and Control with Adversarial Surprise (Poster)
Learning Task-Relevant Representations with Selective Contrast for Reinforcement Learning in a Real-World Application (Poster)
Control-Oriented Model-Based Reinforcement Learning with Implicit Differentiation (Poster)
Reward-Free Policy Space Compression for Reinforcement Learning (Poster)
Discovering and Achieving Goals with World Models (Poster)
Did I do that? Blame as a means to identify controlled effects in reinforcement learning (Poster)
Visualizing MuZero Models (Poster)
Exploration and preference satisfaction trade-off in reward-free learning (Poster)
Hierarchical Few-Shot Imitation with Skill Transition Models (Poster)
When Does Overconservatism Hurt Offline Learning? (Poster)
MASAI: Multi-agent Summative Assessment Improvement for Unsupervised Environment Design (Poster)
Exploration via Empowerment Gain: Combining Novelty, Surprise and Learning Progress (Poster)
Unsupervised Skill-Discovery and Skill-Learning in Minecraft (Poster)
Reward is enough for convex MDPs (Poster)
Discovering Diverse Nearly Optimal Policies with Successor Features (Poster)