Poster
Exploration in Approximate Hyper-State Space for Meta Reinforcement Learning
Luisa Zintgraf · Leo Feng · Cong Lu · Maximilian Igl · Kristian Hartikainen · Katja Hofmann · Shimon Whiteson

Thu Jul 22 09:00 PM -- 11:00 PM (PDT) @ Virtual

To rapidly learn a new task, it is often essential for agents to explore efficiently, especially when performance matters from the first timestep. One way to learn such behaviour is via meta-learning. However, many existing methods rely on dense rewards for meta-training and can fail catastrophically if the rewards are sparse. Without a suitable reward signal, the need for exploration during meta-training is exacerbated. To address this, we propose HyperX, which uses novel reward bonuses for meta-training to explore in approximate hyper-state space (where hyper-states represent the environment state together with the agent's task belief). We show empirically that HyperX meta-learns better task exploration and adapts more successfully to new tasks than existing methods.
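The abstract describes reward bonuses computed over hyper-states, i.e. the environment state paired with the agent's task belief. Below is a minimal illustrative sketch of one plausible instantiation, not the authors' implementation: a random-network-distillation-style novelty bonus over concatenated (state, belief) pairs. All names here (HyperStateBonus, state_dim, belief_dim) are hypothetical.

# Sketch only: an RND-style novelty bonus over hyper-states
# (state concatenated with task belief). Illustrative, not the paper's code.
import torch
import torch.nn as nn

class HyperStateBonus(nn.Module):
    def __init__(self, state_dim, belief_dim, hidden_dim=128, out_dim=32, lr=1e-3):
        super().__init__()
        in_dim = state_dim + belief_dim  # hyper-state = (state, belief)
        def mlp():
            return nn.Sequential(
                nn.Linear(in_dim, hidden_dim), nn.ReLU(),
                nn.Linear(hidden_dim, out_dim),
            )
        self.target = mlp()      # fixed, randomly initialised network
        self.predictor = mlp()   # trained to match the target's outputs
        for p in self.target.parameters():
            p.requires_grad_(False)
        self.opt = torch.optim.Adam(self.predictor.parameters(), lr=lr)

    def bonus(self, state, belief):
        # Prediction error is large for rarely visited hyper-states,
        # so adding it to the reward encourages visiting them.
        h = torch.cat([state, belief], dim=-1)
        with torch.no_grad():
            target_feat = self.target(h)
        return (self.predictor(h) - target_feat).pow(2).mean(dim=-1)

    def update(self, state, belief):
        # Training the predictor makes the bonus decay for visited hyper-states.
        loss = self.bonus(state, belief).mean()
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        return loss.item()

# Usage: add the (detached) bonus to the environment reward during meta-training.
bonus_fn = HyperStateBonus(state_dim=4, belief_dim=8)
s, b = torch.randn(16, 4), torch.randn(16, 8)
r_bonus = bonus_fn.bonus(s, b).detach()
bonus_fn.update(s, b)

Because the predictor catches up on frequently visited hyper-states, the bonus naturally anneals over meta-training, leaving only the task reward once the hyper-state space has been covered.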

Author Information

Luisa Zintgraf (University of Oxford)
Leo Feng (Mila)
Cong Lu (University of Oxford)
Maximilian Igl (University of Oxford)
Kristian Hartikainen (UC Berkeley)
Katja Hofmann (Microsoft)
Shimon Whiteson (University of Oxford)
