Poster

Generalization to New Actions in Reinforcement Learning

Ayush Jain · Andrew Szot · Joseph Lim

Virtual

Keywords: [ Reinforcement Learning - Deep RL ] [ Unsupervised Learning ] [ Transfer and Multitask Learning ] [ Representation Learning ] [ Deep Reinforcement Learning ]


Abstract:

A fundamental trait of intelligence is the ability to achieve goals in the face of novel circumstances, such as making decisions from new action choices. However, standard reinforcement learning assumes a fixed set of actions and requires expensive retraining when given a new action set. To make learning agents more adaptable, we introduce the problem of zero-shot generalization to new actions. We propose a two-stage framework: the agent first infers action representations from action information acquired separately from the task, and a policy that is flexible to varying action sets is then trained with generalization objectives. We benchmark generalization on sequential tasks, such as selecting from an unseen tool set to solve physical reasoning puzzles and stacking towers with novel 3D shapes. Videos and code are available at https://sites.google.com/view/action-generalization.
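To make the second stage concrete, below is a minimal sketch (not the authors' implementation) of a policy that can handle a varying action set: each available action is summarized by a representation vector, and the policy scores state-action pairs, so the same network applies to any number of candidate actions, including ones never seen during training. All module names, dimensions, and the scoring scheme are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ActionSetPolicy(nn.Module):
    """Scores each available action representation against the state,
    so the action set can change in size and content between episodes."""

    def __init__(self, state_dim: int, action_repr_dim: int, hidden: int = 128):
        super().__init__()
        self.state_enc = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.action_enc = nn.Sequential(nn.Linear(action_repr_dim, hidden), nn.ReLU())
        self.scorer = nn.Linear(hidden, 1)

    def forward(self, state: torch.Tensor, action_reprs: torch.Tensor):
        # state: (batch, state_dim); action_reprs: (batch, num_actions, repr_dim)
        s = self.state_enc(state).unsqueeze(1)    # (batch, 1, hidden)
        a = self.action_enc(action_reprs)         # (batch, num_actions, hidden)
        logits = self.scorer(s * a).squeeze(-1)   # (batch, num_actions) pairwise scores
        return torch.distributions.Categorical(logits=logits)

# Usage: at test time the agent receives representations of unseen actions
# (inferred in stage one) and selects among them zero-shot, without retraining.
policy = ActionSetPolicy(state_dim=16, action_repr_dim=32)
state = torch.randn(4, 16)
new_actions = torch.randn(4, 7, 32)  # 7 previously unseen actions
action = policy(state, new_actions).sample()
```

Because the policy consumes action representations rather than indexing a fixed output head, swapping in a new action set only changes the `action_reprs` input, which is what makes zero-shot generalization to new actions possible in this kind of design.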
