We study a security threat to reinforcement learning in which an attacker poisons the learning environment to force the agent into executing a target policy chosen by the attacker. As the victim, we consider RL agents whose objective is to find a policy that maximizes average reward in undiscounted infinite-horizon settings. The attacker can manipulate the rewards or the transition dynamics in the learning environment at training time and is interested in doing so stealthily. We propose an optimization framework for finding an \emph{optimal stealthy attack} under different measures of attack cost. We provide sufficient technical conditions under which the attack is feasible and give lower and upper bounds on the attack cost. We instantiate our attacks in two settings: (i) an \emph{offline} setting, in which the agent plans in the poisoned environment, and (ii) an \emph{online} setting, in which the agent learns a policy using a regret-minimization framework with poisoned feedback. Our results show that the attacker can easily succeed in teaching any target policy to the victim under mild conditions, highlighting a significant security threat to reinforcement learning agents in practice.
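To make the attack formulation concrete, below is a minimal sketch, not the authors' released code, of reward poisoning as a convex program in the average-reward setting: find poisoned rewards R_hat close to the original R under which the target policy outperforms every "neighbor" policy (one differing in a single state) by a margin eps. The toy MDP, the margin value, and helper names such as `poison_rewards` and `stationary_dist` are illustrative assumptions; cvxpy is used as the solver.

```python
# Minimal sketch (assumed names/values, not the paper's released code) of
# reward poisoning as a convex program in the average-reward setting.
import itertools
import numpy as np
import cvxpy as cp

def stationary_dist(P_pi):
    """Stationary distribution mu of an (assumed ergodic) chain P_pi."""
    n = P_pi.shape[0]
    A = np.vstack([P_pi.T - np.eye(n), np.ones((1, n))])  # mu^T P = mu^T, sum(mu) = 1
    b = np.zeros(n + 1)
    b[-1] = 1.0
    mu, *_ = np.linalg.lstsq(A, b, rcond=None)
    return mu

def poison_rewards(P, R, target, eps=0.1):
    """Minimize ||R_hat - R||_F s.t. the target policy beats every neighbor
    policy (one that differs from the target in exactly one state) by
    margin eps in average reward under R_hat."""
    n_s, n_a = R.shape
    R_hat = cp.Variable((n_s, n_a))

    def rho(pi):
        # Average reward of deterministic policy pi under R_hat. Only rewards
        # are poisoned, so mu_pi is a constant and rho is linear in R_hat.
        P_pi = P[np.arange(n_s), pi]                 # (n_s, n_s) chain under pi
        mu = stationary_dist(P_pi)
        return cp.sum(cp.multiply(mu, R_hat[np.arange(n_s), pi]))

    rho_target = rho(target)
    constraints = []
    for s, a in itertools.product(range(n_s), range(n_a)):
        if a == target[s]:
            continue
        neighbor = target.copy()
        neighbor[s] = a                              # swap the action in one state
        constraints.append(rho_target >= rho(neighbor) + eps)
    prob = cp.Problem(cp.Minimize(cp.norm(R_hat - R, "fro")), constraints)
    prob.solve()
    return R_hat.value

# Toy random (hence ergodic) MDP and an arbitrary target policy, for illustration.
rng = np.random.default_rng(0)
n_s, n_a = 3, 2
P = rng.dirichlet(np.ones(n_s), size=(n_s, n_a))     # P[s, a] is a next-state distribution
R = rng.uniform(-1.0, 1.0, size=(n_s, n_a))
target = np.array([1, 0, 1])
print(np.round(poison_rewards(P, R, target), 3))
```

Because only rewards are poisoned here, each policy's stationary distribution is a constant, so every constraint is linear in R_hat and the program stays convex; restricting the constraints to neighbor policies, rather than all |A|^|S| deterministic policies, is what keeps the program tractable, with a policy-improvement-style argument justifying that restriction in the average-reward setting.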
Author Information
Amin Rakhsha (Max Planck Institute for Software Systems (MPI-SWS))
Goran Radanovic (Max Planck Institute for Software Systems)
Rati Devidze (Max Planck Institute for Software Systems)
Jerry Zhu (University of Wisconsin-Madison)
Adish Singla (Max Planck Institute for Software Systems (MPI-SWS))
More from the Same Authors
- 2021: Corruption Robust Offline Reinforcement Learning
  Xuezhou Zhang · Yiding Chen · Jerry Zhu · Wen Sun
- 2022 Poster: Out-of-Distribution Detection with Deep Nearest Neighbors
  Yiyou Sun · Yifei Ming · Jerry Zhu · Yixuan Li
- 2022 Spotlight: Out-of-Distribution Detection with Deep Nearest Neighbors
  Yiyou Sun · Yifei Ming · Jerry Zhu · Yixuan Li
- 2021: Poster spotlight presentations 2
  Sebastian Tschiatschek · Adish Singla · Besmira Nushi
- 2021: Poster spotlight presentations 1
  Sebastian Tschiatschek · Adish Singla · Besmira Nushi
- 2021 Workshop: Human-AI Collaboration in Sequential Decision-Making
  Besmira Nushi · Adish Singla · Sebastian Tschiatschek
- 2021 Poster: Robust Policy Gradient against Strong Data Corruption
  Xuezhou Zhang · Yiding Chen · Jerry Zhu · Wen Sun
- 2021 Spotlight: Robust Policy Gradient against Strong Data Corruption
  Xuezhou Zhang · Yiding Chen · Jerry Zhu · Wen Sun
- 2020 Workshop: Incentives in Machine Learning
  Boi Faltings · Yang Liu · David Parkes · Goran Radanovic · Dawn Song
- 2020 Poster: Adaptive Reward-Poisoning Attacks against Reinforcement Learning
  Xuezhou Zhang · Yuzhe Ma · Adish Singla · Jerry Zhu
- 2019 Poster: Efficient learning of smooth probability functions from Bernoulli tests with guarantees
  Paul Rolland · Ali Kavis · Alexander Niklaus Immer · Adish Singla · Volkan Cevher
- 2019 Oral: Efficient learning of smooth probability functions from Bernoulli tests with guarantees
  Paul Rolland · Ali Kavis · Alexander Niklaus Immer · Adish Singla · Volkan Cevher
- 2019 Poster: Learning to Collaborate in Markov Decision Processes
  Goran Radanovic · Rati Devidze · David Parkes · Adish Singla
- 2019 Poster: Teaching a black-box learner
  Sanjoy Dasgupta · Daniel Hsu · Stefanos Poulis · Jerry Zhu
- 2019 Oral: Learning to Collaborate in Markov Decision Processes
  Goran Radanovic · Rati Devidze · David Parkes · Adish Singla
- 2019 Oral: Teaching a black-box learner
  Sanjoy Dasgupta · Daniel Hsu · Stefanos Poulis · Jerry Zhu