

Poster
in
Workshop: Foundations of Reinforcement Learning and Control: Connections and Perspectives

Safe online nonstochastic control from data

Sebastian Kerz · Armin Lederer · Marion Leibold · Dirk Wollherr


Abstract:

Online nonstochastic control has emerged as a promising strategy for online convex optimization of control policies for linear systems subject to adversarial disturbances and time-varying cost functions. However, ensuring safety in these systems remains a significant open problem, especially when the system parameters are unknown. Practical nonstochastic control algorithms for real-world systems must adhere to safety constraints without becoming overly conservative or relying on exact models. We address this challenge by presenting a safe nonstochastic control algorithm for systems with unknown parameters subject to state and input constraints. Given data from a single disturbed input-state trajectory, we design non-conservative constraint sets for the policy parameters and develop a robust strongly stabilizing controller. By drawing a connection to model predictive control, we propose a new analysis perspective and show how a slight change in the nonstochastic control algorithm can drastically improve performance when disturbances are constant or slowly time-varying.
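To make the setting concrete, the standard template behind nonstochastic control is a disturbance-action controller (DAC) whose parameters are updated by projected online gradient descent. The sketch below is illustrative only: the system matrices, the stabilizing gain K, the quadratic cost, the sinusoidal disturbance, and the norm-ball projection are all hypothetical placeholders and do not reproduce the paper's data-driven constraint sets or its robust strongly stabilizing controller.

```python
import numpy as np

# Illustrative DAC sketch: u_t = -K x_t + sum_i M_i w_{t-i}, with a projected
# online gradient step on the parameters M. All constants are assumed.
n, m, H, T = 2, 1, 5, 200                    # state dim, input dim, history, horizon
A = np.array([[1.0, 0.1], [0.0, 1.0]])       # hypothetical system matrices
B = np.array([[0.0], [0.1]])
K = np.array([[1.0, 1.5]])                   # chosen so (A - B K) is stable
eta, radius = 0.05, 2.0                      # step size, projection radius

M = np.zeros((H, m, n))                      # DAC parameters
w_hist = np.zeros((H, n))                    # recent disturbances, newest first
x = np.zeros(n)
costs = []

for t in range(T):
    u = -K @ x + sum(M[i] @ w_hist[i] for i in range(H))
    w = 0.1 * np.sin(0.05 * t) * np.ones(n)  # bounded, slowly varying disturbance
    x_next = A @ x + B @ u + w
    costs.append(float(x @ x) + float(u @ u))

    # One-step surrogate gradient of the state cost w.r.t. each M_i, then
    # projection of M_i onto a Frobenius-norm ball -- a crude stand-in for
    # the non-conservative constraint sets on the policy parameters.
    for i in range(H):
        grad_i = 2.0 * np.outer(B.T @ x_next, w_hist[i])
        M[i] -= eta * grad_i
        nrm = np.linalg.norm(M[i])
        if nrm > radius:
            M[i] *= radius / nrm

    w_hist = np.vstack([w[None, :], w_hist[:-1]])
    x = x_next

print(f"final state norm: {np.linalg.norm(x):.3f}")
```

The projection step is what keeps the learned policy inside a prescribed parameter set at every round; the paper's contribution is, among other things, building such a set from a single disturbed trajectory so that it certifies state and input constraints without being overly conservative.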
