

Poster
in
Workshop: Structured Probabilistic Inference and Generative Modeling

Decision Stacks: Flexible Reinforcement Learning via Modular Generative Models

Siyan Zhao · Aditya Grover

Keywords: [ Generative Models ] [ Reinforcement Learning ] [ offline RL ] [ modularity ] [ sequential decision making ]


Abstract:

Reinforcement learning presents an attractive paradigm to reason about several distinct aspects of sequential decision making, such as specifying complex goals, planning future observations and actions, and critiquing their utilities. These capabilities demand a balance between expressivity and flexible modeling for efficient learning and inference. We present Decision Stacks, a probabilistic generative framework that decomposes goal-conditioned policy agents into three generative modules that simulate the temporal evolution of observations, rewards, and actions. Our framework guarantees both expressivity and flexibility in designing individual modules to account for key factors such as architectural bias, optimization objective and dynamics, transferability across domains, and inference speed. Our empirical results demonstrate the effectiveness of Decision Stacks for offline policy optimization in several MDP and POMDP environments.
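The modular decomposition described above can be illustrated with a minimal sketch: three separate modules for observations, rewards, and actions, chained at rollout time. The class names, interfaces, and toy deterministic dynamics below are illustrative assumptions, not the paper's actual learned generative architectures.

```python
from typing import List, Tuple

class ObservationModel:
    """Simulates the temporal evolution of observations given the history.
    Toy deterministic dynamics stand in for a learned generative model."""
    def predict(self, obs_history: List[float]) -> float:
        return obs_history[-1] + 1.0

class RewardModel:
    """Predicts the reward associated with a predicted observation."""
    def predict(self, obs: float, goal: float) -> float:
        return -abs(goal - obs)  # closer to the goal -> higher reward

class ActionModel:
    """Infers an action consistent with the predicted observation and reward."""
    def predict(self, obs: float, reward: float) -> int:
        return 1 if reward < 0 else 0  # keep moving until the goal is reached

def rollout(obs0: float, goal: float, horizon: int) -> List[Tuple[float, float, int]]:
    """Chains the three modules to produce an (obs, reward, action) trajectory.
    Because each module is independent, any one of them could be swapped out
    (e.g. a different architecture or objective) without touching the others."""
    obs_model, rew_model, act_model = ObservationModel(), RewardModel(), ActionModel()
    history = [obs0]
    trajectory = []
    for _ in range(horizon):
        obs = obs_model.predict(history)
        rew = rew_model.predict(obs, goal)
        act = act_model.predict(obs, rew)
        history.append(obs)
        trajectory.append((obs, rew, act))
    return trajectory
```

The point of the sketch is the interface boundary: each module conditions only on the outputs of the modules before it, which is what permits independent design choices per module.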
