Poster
in
Workshop: Foundations of Reinforcement Learning and Control: Connections and Perspectives

Online Performance Optimization of Nonlinear Systems: A Gray-Box Approach

Zhiyu He · Michael Muehlebach · Saverio Bolognani · Florian Dörfler


Abstract:

We propose a gray-box controller that optimizes the performance of a nonlinear system in an online manner. It is motivated by the observation that model-based and model-free approaches offer complementary benefits: the former are sample-efficient, whereas the latter attain optimality even when only inaccurate models are available. To achieve the best of both worlds, our controller incorporates approximate model information into model-free updates via adaptive convex combinations. Furthermore, it leverages real-time outputs of the system and iteratively adjusts the control inputs. We quantify conditions on the quality of the approximate model under which the gray-box approach is preferable to purely model-based or model-free approaches. We characterize the performance of our controller via dynamic regret in a constrained, time-varying setting, and highlight how the regret scales with the number of iterations, the problem dimension, and the cumulative effect of model inaccuracies.
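To make the idea concrete, the following is a minimal sketch of one plausible instantiation of such a gray-box update, not the authors' actual algorithm. All names, the toy plant, the cost function, and the fixed mixing weight `lam` are illustrative assumptions: a model-based direction (chain rule through an inaccurate Jacobian of the input-output map) is convexly combined with a model-free direction (a randomized two-point zeroth-order gradient estimate built from measured outputs), and the input is projected onto a box constraint set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical plant: the true (unknown) nonlinear input-output map y = h(u).
def plant(u):
    return np.array([u[0] + 0.5 * np.sin(u[1]),
                     0.8 * u[1] + 0.1 * u[0] ** 2])

# Approximate model: a deliberately inaccurate Jacobian of h
# (the "gray" information available to the controller).
def approx_jacobian(u):
    return np.array([[1.0, 0.4 * np.cos(u[1])],   # true entry: 0.5*cos(u[1])
                     [0.3 * u[0], 0.7]])          # true entries: 0.2*u[0], 0.8

# Illustrative cost on inputs and measured outputs, Phi(u, y).
y_ref = np.array([1.0, -0.5])
def cost(u, y):
    return 0.5 * np.sum((y - y_ref) ** 2) + 0.05 * np.sum(u ** 2)

def grad_y(u, y):   # partial derivative of Phi w.r.t. y
    return y - y_ref

def grad_u(u, y):   # partial derivative of Phi w.r.t. u
    return 0.1 * u

u = np.zeros(2)
eta, delta, lam = 0.2, 1e-2, 0.5   # step size, probe radius, mixing weight
for k in range(200):
    y = plant(u)                                   # real-time measurement
    # Model-based direction: chain rule through the approximate Jacobian.
    g_model = grad_u(u, y) + approx_jacobian(u).T @ grad_y(u, y)
    # Model-free direction: randomized two-point zeroth-order estimate.
    v = rng.standard_normal(2)
    g_free = (cost(u + delta * v, plant(u + delta * v)) - cost(u, y)) / delta * v
    # Gray-box update: convex combination of the two directions,
    # followed by projection onto a simple box constraint set.
    u = np.clip(u - eta * (lam * g_model + (1 - lam) * g_free), -2.0, 2.0)

print(cost(u, plant(u)))
```

In the paper the mixing weight is adapted online rather than fixed; the fixed `lam = 0.5` here is purely for illustration. The sketch starts at cost 0.625 (at `u = 0`) and the combined update drives it down despite the Jacobian errors.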
