QPKO: Differentiable QP-Embedded Deep Koopman Framework for Modeling Nonlinear Systems
Abstract
Deep learning has been widely regarded as a powerful tool for Koopman operator theory-based modeling, as it provides a promising architecture for data-driven learning of observable functions. To fully leverage this advantage, a well-designed training paradigm is required. However, existing training paradigms typically either incur high optimization complexity or preclude effective end-to-end training, limiting modeling accuracy and training efficiency. To address this issue, we propose a differentiable quadratic programming (QP)-embedded deep Koopman framework (QPKO). In QPKO, a QP problem, comprising a one-step accuracy-oriented objective function and a set of multi-step accuracy-oriented constraints, is formulated to define a mapping from the observable functions to the global linear model. Consequently, the global linear model no longer needs to be treated as an independent trainable component, which effectively reduces optimization complexity. This QP-based mapping is implemented as a differentiable and computationally efficient module by leveraging OptNet (a differentiable QP layer), enabling effective end-to-end training. Experiments on four nonlinear dynamical systems show that QPKO achieves consistent improvements in modeling accuracy, training efficiency, and control performance.
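The core mechanism behind a differentiable QP layer such as OptNet is implicit differentiation of the QP's KKT optimality conditions, which lets gradients flow from the QP solution back to the parameters produced by the upstream network. The following is a minimal NumPy sketch of this idea for an equality-constrained QP; the variable names (`Q`, `q`, `A`, `b`) and the toy problem are illustrative assumptions, not the paper's actual formulation, which additionally involves Koopman observables and multi-step constraints.

```python
import numpy as np

def solve_eq_qp(Q, q, A, b):
    """Solve min_z 0.5 z'Qz + q'z  s.t.  Az = b via its KKT system.

    Returns the primal solution z and the KKT matrix (reused for gradients).
    Assumes Q is positive definite and A has full row rank.
    """
    n, m = Q.shape[0], A.shape[0]
    K = np.block([[Q, A.T],
                  [A, np.zeros((m, m))]])
    rhs = np.concatenate([-q, b])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], K

def dz_dq(K, n):
    """Jacobian of the QP solution z w.r.t. the linear term q.

    Differentiating the KKT system K [z; nu] = [-q; b] implicitly gives
    d[z; nu]/dq = K^{-1} [-I; 0], i.e. dz/dq = -(K^{-1})[:n, :n].
    In an end-to-end setting this is how loss gradients are propagated
    through the QP layer to the network that produced q.
    """
    return -np.linalg.inv(K)[:n, :n]

# Toy instance: hypothetical numbers for a gradient check.
Q = np.diag([2.0, 3.0, 4.0])
q = np.array([1.0, -0.5, 0.2])
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])

z, K = solve_eq_qp(Q, q, A, b)
J = dz_dq(K, 3)

# Finite-difference check of the implicit gradient.
eps = 1e-6
for j in range(3):
    q_pert = q.copy()
    q_pert[j] += eps
    z_pert, _ = solve_eq_qp(Q, q_pert, A, b)
    assert np.allclose((z_pert - z) / eps, J[:, j], atol=1e-4)
```

Because the QP solution is an implicit function of its parameters, the backward pass costs only one extra solve with the already-factored KKT matrix, which is why embedding the QP as a layer can remain computationally efficient during training.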