

Poster in Workshop: Foundations of Reinforcement Learning and Control: Connections and Perspectives

Non-Linear $H_\infty$ Robustness Guarantees for Neural Network Policies

Daniel Urieli


Abstract:

Robust control methods ensure system stability under disturbances but often fall short in performance when applied to non-linear systems. Neural-network-based control methods trained with deep reinforcement learning (RL) have achieved state-of-the-art performance on many challenging non-linear tasks but often lack robustness guarantees. Prior work proposed a method to enforce robust control guarantees within neural network policies, improving average-case performance over existing robust control methods and worst-case stability over deep RL methods. However, that method assumed linear time-invariant dynamics, which restricts the allowable actions and limits the flexibility of neural network policies in handling non-linear dynamics. This paper presents a novel approach to enforcing non-linear $H_\infty$ robustness guarantees for neural network policies, along with a tunable robustness parameter that allows trading off robustness against average performance, an essential feature for real-world deployments. Although experimental validation of our approach is still ongoing, we believe the theoretical foundations presented here advance the deployment of robust neural network policies in practical applications by offering a comprehensive solution for enhancing performance and robustness in non-linear dynamic systems.
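To make the general mechanism concrete, below is a minimal, hypothetical sketch (not the paper's method, whose details are not given in this abstract) of enforcing a non-linear $H_\infty$ dissipation constraint on a neural network policy's action at each state. It assumes control-affine dynamics $\dot{x} = f(x) + g(x)u + w$, a known quadratic storage function $V(x) = x^\top P x$, and performance output $z = Cx$. Under these assumptions, maximizing over the disturbance $w$ in the dissipation inequality $\dot{V} \le \gamma^2 \|w\|^2 - \|z\|^2$ eliminates $w$ in closed form, leaving a half-space constraint on $u$ with the gain bound $\gamma$ acting as a tunable robustness parameter; all names and dynamics here are illustrative.

```python
# Illustrative sketch: project a raw neural-network action onto the set of
# actions satisfying a nonlinear H-infinity dissipation inequality.
# With V(x) = x^T P x, the worst-case disturbance is w* = grad_V / (2 gamma^2),
# so the constraint "dV/dt <= gamma^2 ||w||^2 - ||z||^2 for all w" reduces to
#   grad_V . (f(x) + g(x) u) + ||grad_V||^2 / (4 gamma^2) + ||z||^2 <= 0,
# which is a half-space a^T u <= b with a closed-form Euclidean projection.
import numpy as np

def project_action(u_raw, x, f, g, P, C, gamma):
    """Project a raw policy action onto the H_inf-admissible half-space at state x."""
    grad_V = 2.0 * P @ x                          # gradient of V(x) = x^T P x
    a = g(x).T @ grad_V                           # constraint normal: a^T u <= b
    b = -(grad_V @ f(x)
          + grad_V @ grad_V / (4.0 * gamma**2)    # worst-case disturbance term
          + x @ (C.T @ C) @ x)                    # ||z||^2 with z = C x
    violation = a @ u_raw - b
    if violation <= 0.0:                          # raw action already admissible
        return u_raw
    # If a == 0 here, no action satisfies the constraint at x; a real method
    # would need a fallback (omitted in this sketch).
    return u_raw - (violation / (a @ a)) * a      # projection onto the half-space

# Hypothetical 2-state, 1-input example: pendulum-like drift with constant g.
f = lambda x: np.array([x[1], -np.sin(x[0])])
g = lambda x: np.array([[0.0], [1.0]])
P = np.eye(2)                     # storage function V(x) = ||x||^2 (assumed valid)
C = np.eye(2)
x = np.array([0.5, -0.2])
u_nn = np.array([3.0])            # raw output of the neural network policy
u_safe = project_action(u_nn, x, f, g, P, C, gamma=2.0)
```

In this sketch, increasing gamma loosens the half-space constraint, trading worst-case robustness for average performance, which mirrors the role of the tunable robustness parameter described in the abstract.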
