

Poster in Workshop: 2nd ICML Workshop on New Frontiers in Adversarial Machine Learning

Game-Theoretic Robust Reinforcement Learning Handles Temporally-Coupled Perturbations

Yongyuan Liang · Yanchao Sun · Ruijie Zheng · Xiangyu Liu · Tuomas Sandholm · Furong Huang · Stephen McAleer

Keywords: [ Robustness ] [ Reinforcement Learning ] [ Adversarial Learning ]


Abstract:

Robust reinforcement learning (RL) seeks to train policies that perform well under environment perturbations or adversarial attacks. Existing approaches typically assume that the space of possible perturbations remains the same across timesteps. However, in many settings, the space of possible perturbations at a given timestep depends on past perturbations. We formally introduce temporally-coupled perturbations, which present a novel challenge for existing robust RL methods. To tackle this challenge, we propose GRAD, a novel game-theoretic approach that treats the temporally-coupled robust RL problem as a partially-observable two-player zero-sum game. By finding an approximate equilibrium in this game, GRAD ensures the agent's robustness against temporally-coupled perturbations. Experiments on a variety of continuous control tasks demonstrate that our proposed approach achieves significantly better robustness than baselines against both standard and temporally-coupled attacks, in both state and action spaces.
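To make the idea of temporal coupling concrete, the sketch below shows one plausible way such a constraint could look for a state-space adversary: each perturbation must stay within a per-step budget and also stay close to the previous perturbation. This is a minimal illustration only; the names `eps` and `eps_bar` and the projection scheme are assumptions for exposition, not the paper's exact formulation or the GRAD algorithm itself.

```python
import numpy as np


def project_temporally_coupled(delta_t, delta_prev, eps, eps_bar):
    """Project a candidate perturbation onto an (assumed) temporally-coupled set:
    ||delta_t||_inf <= eps  and  ||delta_t - delta_prev||_inf <= eps_bar."""
    # Enforce the coupling constraint (stay near the previous perturbation),
    # then the overall per-step budget. Since delta_prev itself lies within
    # the eps-ball, both constraints remain satisfied after the two clips.
    delta_t = np.clip(delta_t, delta_prev - eps_bar, delta_prev + eps_bar)
    delta_t = np.clip(delta_t, -eps, eps)
    return delta_t


# Toy rollout of a state-space adversary under the coupled constraint.
rng = np.random.default_rng(0)
state_dim, eps, eps_bar = 4, 0.1, 0.02
delta_prev = np.zeros(state_dim)
for t in range(5):
    raw = rng.normal(scale=eps, size=state_dim)          # unconstrained proposal
    delta = project_temporally_coupled(raw, delta_prev, eps, eps_bar)
    # perturbed_obs = true_obs + delta                    # what the agent would see
    delta_prev = delta
```

Under such a constraint, an attack computed independently at each timestep may be infeasible, which is why the abstract frames the problem as a two-player game whose adversary must plan its perturbations over time.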
