

Poster in Workshop: Reinforcement Learning for Real Life

Automating Power Networks: Improving RL Agent Robustness with Adversarial Training

Alexander Pan · Yongkyun Lee · Huan Zhang


Abstract:

As global demand for electricity increases, operating power networks has become more complex. Power network operation can be posed as a reinforcement learning (RL) task, and there is increasing interest in developing RL agents that can automate operation. The Learning To Run Power Network (L2RPN) environment models a real-world electric grid and serves as a test bed for these RL agents. Agents must be robust, i.e., ensure reliable electricity flow even when some power lines are disconnected. Because of the large state and action spaces of power grids, robustness is hard to achieve and has become a key technical obstacle to the widespread adoption of RL for power networks. To improve the robustness of L2RPN agents, we propose adversarial training. We make the following contributions: 1) we design an agent-specific adversary MDP to train an adversary that minimizes a given agent's reward; 2) we demonstrate the potency of our adversarial policies against winning agent policies from the L2RPN challenge; 3) we improve the robustness of a winning L2RPN agent by adversarially training it against our learned adversary. To the best of our knowledge, we provide the first evidence that learned adversaries for power network agents are potent. We also demonstrate a novel, real-world application of adversarial training: improving the robustness of RL agents for power networks.
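As a rough illustration of the agent-specific adversary MDP in contribution (1), the sketch below wraps a Grid2Op-style L2RPN environment so that the adversary's action is choosing which power line to disconnect and its reward is the negative of a frozen victim agent's reward. The environment interface, the `set_line_status` attack encoding, and the `victim.act` method are assumptions made for illustration; this is not the authors' released code.

```python
# Minimal sketch of an agent-specific "adversary MDP", assuming a
# Grid2Op-style environment `grid_env` and a frozen, pre-trained victim
# agent exposing agent.act(obs). All names here are illustrative.

class AdversaryMDP:
    """Environment seen by the adversary: it picks a line to disconnect,
    the fixed victim agent then operates the grid, and the adversary
    receives the negative of the victim's reward."""

    def __init__(self, grid_env, victim_agent, n_lines):
        self.grid_env = grid_env
        self.victim = victim_agent
        self.n_lines = n_lines  # adversary action = index of the line to attack

    def reset(self):
        self.obs = self.grid_env.reset()
        return self.obs

    def step(self, line_to_disconnect):
        # 1) Adversary's attack: force an outage on the chosen line.
        attack = self.grid_env.action_space(
            {"set_line_status": [(int(line_to_disconnect), -1)]}
        )
        obs, _, done, info = self.grid_env.step(attack)

        # 2) The frozen victim agent responds to the degraded grid state.
        if not done:
            agent_action = self.victim.act(obs)
            obs, agent_reward, done, info = self.grid_env.step(agent_action)
        else:
            agent_reward = 0.0

        self.obs = obs
        # Adversary maximizes the victim's failure: reward is the negation.
        return obs, -agent_reward, done, info
```

Under these assumptions, adversarial training would alternate two phases: train an adversary policy in this MDP against the fixed agent, then fine-tune the agent on episodes in which the learned adversary injects the line outages.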
