

Poster

Efficient Value Iteration for s-rectangular Robust Markov Decision Processes

Navdeep Kumar · Kaixin Wang · Kfir Levy · Shie Mannor

Hall C 4-9 #1205
Wed 24 Jul 4:30 a.m. PDT — 6 a.m. PDT

Abstract:

We focus on s-rectangular robust Markov decision processes (MDPs), which capture interconnected uncertainties across the different actions within each state. This framework is more general than sa-rectangular robust MDPs, where the uncertainty in each action is independent, but the interdependence it introduces significantly amplifies the complexity of the problem. Existing methods either come with slow performance guarantees or do not scale to even moderately large state spaces. In this work, we derive optimal robust Bellman operators in explicit form. This yields robust value iteration methods with significantly faster time complexity than existing approaches, making them usable in large state spaces. Further, our findings reveal that the optimal policies exhibit a novel threshold behavior, selectively favoring a limited set of actions based on their respective advantage functions. Finally, our study uncovers a noteworthy connection between the robustness of a policy and the variance of its value function: policies with lower variance are more resilient.
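
The abstract makes two concrete computational claims: the robust Bellman operator admits an explicit form, so each value-iteration step is cheap, and the resulting optimal policies put mass only on actions whose advantage clears a threshold. The sketch below illustrates both ideas on a toy tabular MDP, under loud assumptions: the L1-ball penalty is a well-known closed-form adversary for the simpler sa-rectangular setting, not the s-rectangular operators the paper derives, and the sparsemax projection is just one standard mechanism that produces threshold behavior over advantages. The tensors `P`, `R` and the radius `beta` are illustrative.

```python
import numpy as np

def robust_value_iteration(P, R, gamma=0.9, beta=0.1, iters=1000, tol=1e-8):
    """Tabular value iteration with an explicit robust penalty.

    P: (S, A, S) nominal transition kernel, R: (S, A) reward table,
    beta: assumed L1 radius of the transition uncertainty set
    (beta = 0 recovers standard, non-robust value iteration).
    """
    v = np.zeros(P.shape[0])
    for _ in range(iters):
        q = R + gamma * (P @ v)  # nominal Q-values, shape (S, A)
        # Closed-form adversary for an L1 ball around each row of P:
        # shifting beta/2 probability mass from the best to the worst
        # successor state lowers the backup by gamma*(beta/2)*(max v - min v).
        # This penalty is an illustrative assumption, not the s-rectangular
        # operator derived in the paper.
        penalty = gamma * (beta / 2.0) * (v.max() - v.min())
        v_new = (q - penalty).max(axis=1)  # greedy robust backup
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
    return v

def threshold_policy(adv):
    """Sparsemax-style projection of an advantage vector onto the simplex:
    only actions whose advantage exceeds a data-dependent threshold tau
    receive positive probability. A generic illustration of threshold
    behavior, not the specific policy form characterized in the paper."""
    z = np.sort(adv)[::-1]               # advantages in decreasing order
    cum = np.cumsum(z)
    ks = np.arange(1, adv.size + 1)
    k_star = ks[1.0 + ks * z > cum][-1]  # size of the active action set
    tau = (cum[k_star - 1] - 1.0) / k_star
    return np.maximum(adv - tau, 0.0)    # zero mass below the threshold

# Toy usage on a random 5-state, 3-action MDP.
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(5), size=(5, 3))  # rows are valid distributions
R = rng.uniform(size=(5, 3))
print(robust_value_iteration(P, R))
print(threshold_policy(np.array([0.5, 0.1, -0.2, 0.4])))  # -> [0.5 0.1 0.  0.4]
```

The point of the sketch is structural: once the adversary's best response is available in closed form, robust value iteration costs essentially the same per iteration as its non-robust counterpart, which is what makes the approach viable for large state spaces.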
