

Spotlight in Workshop: Continuous Time Perspectives in Machine Learning

Continuous-time Analysis for Variational Inequalities: An Overview & Desiderata

Tatjana Chavdarova · Ya-Ping Hsieh


Abstract:

The optimization of zero-sum games, multi-objective agent training, and, more generally, variational inequality (VI) problems is notoriously unstable on general problems. Owing to the growing need to train such models in machine learning, this observation has attracted significant research attention in recent years. Substantial progress has been made toward understanding the qualitative differences from single-objective minimization by casting an optimization method into its corresponding continuous-time dynamics, and toward obtaining convergence guarantees and rates for some instances of VIs, since such guarantees often guide the corresponding proofs for the discrete counterparts. Most notably, continuous-time tools have enabled the analysis of complex non-convex problems that, in some cases, cannot be handled with standard discrete-time tools. This paper provides an overview of these ideas for the broad VI problem class, together with the insights gained by applying continuous-time tools to VI problems. We conclude by describing a set of desiderata, fundamental open questions toward developing optimization methods that work for general VIs, and argue that tackling them requires understanding the associated continuous-time dynamics.
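The gap between a discrete method and its continuous-time dynamics that the abstract alludes to can be seen on the textbook example of a VI instance: the bilinear zero-sum game min_x max_y xy. This is a hedged sketch, not from the paper itself; the game, step size, and iteration count are chosen only for illustration. The continuous-time flow dx/dt = -y, dy/dt = x conserves x^2 + y^2 (trajectories cycle), while its explicit Euler discretization, simultaneous gradient descent-ascent, multiplies x^2 + y^2 by (1 + lr^2) every step and therefore spirals outward, which is one of the classic instability phenomena motivating continuous-time analysis.

```python
import numpy as np

def gda_step(x, y, lr):
    """One step of simultaneous gradient descent-ascent on f(x, y) = x * y.

    This is the explicit Euler discretization of the continuous flow
    dx/dt = -y, dy/dt = x, whose trajectories conserve x^2 + y^2.
    """
    return x - lr * y, y + lr * x

x, y = 1.0, 0.0
radius_sq_0 = x**2 + y**2  # conserved quantity of the continuous flow
for _ in range(100):
    x, y = gda_step(x, y, lr=0.1)

# Each discrete step scales x^2 + y^2 by exactly (1 + lr^2) = 1.01,
# so after 100 steps the squared radius is 1.01**100, about 2.7:
radius_sq = x**2 + y**2
print(np.isclose(radius_sq, 1.01**100))  # prints True
```

The same experiment with a smaller step size diverges more slowly but still diverges, which is why qualitative conclusions here are drawn from the limiting continuous-time dynamics rather than from any single discretization.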
