

Oral

Efficient RL via Disentangled Environment and Agent Representations

Kevin Gmelin · Shikhar Bahl · Russell Mendonca · Deepak Pathak

Ballroom B
Oral A5: Reinforcement Learning 1

Abstract:

Agents that are aware of the separation between their environment and themselves can leverage this understanding to form effective representations of visual input. We propose an approach for learning such structured representations for RL algorithms, using visual knowledge of the agent, which is often inexpensive to obtain, such as its shape or mask. This knowledge is incorporated into the RL objective using a simple auxiliary loss. We show that our method, SEAR (Structured Environment-Agent Representations), outperforms state-of-the-art model-free approaches across 18 challenging visual simulation environments spanning 5 different robots.
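To make the idea of an auxiliary agent-mask loss concrete, here is a minimal, hypothetical sketch of how such a term could be attached to a visual RL encoder. All module names, shapes, and the loss weighting below are assumptions for illustration only, not the authors' SEAR implementation.

```python
# Hypothetical sketch: auxiliary agent-mask loss for a disentangled
# agent/environment representation. Architecture and weights are assumed,
# not taken from the SEAR paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DisentangledEncoder(nn.Module):
    """Encodes an image observation into separate agent and environment latents."""

    def __init__(self, latent_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.agent_head = nn.Linear(64, latent_dim)  # agent-specific latent
        self.env_head = nn.Linear(64, latent_dim)    # environment latent

    def forward(self, obs):
        h = self.conv(obs)
        return self.agent_head(h), self.env_head(h)


class MaskDecoder(nn.Module):
    """Predicts the agent's segmentation mask from the agent latent alone."""

    def __init__(self, latent_dim=64, out_size=64):
        super().__init__()
        self.out_size = out_size
        self.fc = nn.Linear(latent_dim, out_size * out_size)

    def forward(self, z_agent):
        logits = self.fc(z_agent)
        return logits.view(-1, 1, self.out_size, self.out_size)


def auxiliary_mask_loss(encoder, mask_decoder, obs, agent_mask):
    """Binary cross-entropy between predicted and ground-truth agent masks.

    Requiring the mask to be decodable from z_agent alone pushes agent-specific
    information into that latent, encouraging the agent/environment split.
    """
    z_agent, _ = encoder(obs)
    pred_logits = mask_decoder(z_agent)
    return F.binary_cross_entropy_with_logits(pred_logits, agent_mask)


if __name__ == "__main__":
    encoder = DisentangledEncoder()
    decoder = MaskDecoder()
    obs = torch.rand(8, 3, 64, 64)                     # batch of RGB observations
    mask = (torch.rand(8, 1, 64, 64) > 0.5).float()    # agent segmentation masks
    aux = auxiliary_mask_loss(encoder, decoder, obs, mask)
    rl_loss = torch.tensor(0.0)                        # placeholder for the RL objective
    total = rl_loss + 1.0 * aux                        # auxiliary weight chosen arbitrarily here
    total.backward()
    print(f"auxiliary mask loss: {aux.item():.4f}")
```

In this sketch the auxiliary term would simply be added to whatever model-free RL loss is being optimized; the relative weighting and the exact decoder targets (mask, agent reconstruction, etc.) are design choices described in the paper itself.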
