

Poster in Workshop: Duality Principles for Modern Machine Learning

Reward-Based Reinforcement Learning with Risk Constraints

Jane Lee · Konstantinos Nikolakakis · Dionysios Kalogerias · Amin Karbasi

Keywords: [ constrained RL ] [ risk measures ] [ duality ] [ reward-based RL ]


Abstract:

Constrained optimization provides a common framework and solvers for dealing with conflicting objectives in reinforcement learning (RL). In most such settings, the objectives (and constraints) are expressed through the expected accumulated reward. However, this formulation neglects possibly catastrophic events at the tails of the distribution, and it is often insufficient for high-stakes applications in which the risk carried by outliers is critical. In this work, we propose a new framework for risk-aware reinforcement learning that handles reward-based risk measures in both the objective and the constraints, and that exhibits robustness properties jointly in time and space. The derivation connects the convex-analytic duality between return-based and reward-based formulations of RL with Lagrangian duality, and the resulting method can easily be implemented on top of existing RL algorithms. Furthermore, unlike the Lagrangian relaxations used in prior risk-constrained works, our framework admits an exact equivalence to the primal problem through a parameterized strong duality. Finally, we provide convergence guarantees for the proposed algorithm under common assumptions on the objective.
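To make the general idea of a Lagrangian-relaxed, risk-constrained RL update concrete, here is a minimal illustrative sketch, not the paper's algorithm: it pairs an expected-return objective with a reward-based (per-step) risk constraint and a projected dual-ascent step on the multiplier. The CVaR-style risk measure, the `risk_budget` threshold, and all function names are hypothetical choices made for illustration only.

```python
# Illustrative sketch (not the paper's method): Lagrangian relaxation of a
# risk-constrained RL objective, where the risk is measured on per-step rewards.
import numpy as np

def cvar(samples, alpha=0.1):
    """Empirical CVaR_alpha of the losses (negative rewards): mean of the worst alpha-fraction."""
    losses = -np.asarray(samples)
    k = max(1, int(np.ceil(alpha * losses.size)))
    return np.sort(losses)[-k:].mean()

def lagrangian_and_dual_step(step_rewards, lam, risk_budget=1.0, lam_lr=0.01):
    """
    step_rewards : per-step rewards collected under the current policy (1-D array)
    lam          : current Lagrange multiplier (>= 0)
    Returns the scalar Lagrangian (to be maximized over the policy by any
    policy-gradient method) and the multiplier after one projected ascent step.
    """
    objective = step_rewards.sum()             # return-style objective
    risk = cvar(step_rewards, alpha=0.1)       # reward-based risk of per-step outcomes
    violation = risk - risk_budget             # constraint: risk <= risk_budget
    lagrangian = objective - lam * violation
    lam_new = max(0.0, lam + lam_lr * violation)   # dual ascent, projected onto lam >= 0
    return lagrangian, lam_new

# Toy usage with synthetic rewards standing in for one rollout:
rng = np.random.default_rng(0)
rewards = rng.normal(loc=1.0, scale=2.0, size=200)
L, lam = lagrangian_and_dual_step(rewards, lam=0.5)
print(f"Lagrangian={L:.2f}, updated multiplier={lam:.3f}")
```

In a full primal-dual loop, the Lagrangian value above would be estimated from rollouts and ascended over policy parameters by an off-the-shelf RL algorithm, alternating with the multiplier update; the paper's contribution, by contrast, is an exact (strong-duality) treatment rather than this kind of heuristic relaxation.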
