Poster in Workshop: New Frontiers in Learning, Control, and Dynamical Systems
Kernel Mirror Prox and RKHS Gradient Flow for Mixed Functional Nash Equilibrium
Pavel Dvurechenskii · Jia-Jie Zhu
Abstract:
The theoretical analysis of machine learning algorithms, such as deep generative modeling, motivates multiple recent works on the Mixed Nash Equilibrium (MNE) problem. Different from MNE, this paper formulates the Mixed Functional Nash Equilibrium (MFNE), which replaces one of the measure optimization problems with optimization over a class of dual functions, e.g., the reproducing kernel Hilbert space (RKHS) in the case of the Mixed Kernel Nash Equilibrium (MKNE). We show that our MFNE and MKNE frameworks form the backbone of several existing machine learning algorithms, such as implicit generative models, distributionally robust optimization (DRO), and Wasserstein barycenters. To model the infinite-dimensional continuous-limit optimization dynamics, we propose the Interacting Wasserstein-Kernel Gradient Flow, which includes the RKHS flow that is much less common than the Wasserstein gradient flow but enjoys a much simpler convexity structure. Time-discretizing this gradient flow, we propose a primal-dual kernel mirror prox algorithm, which alternates between a dual step in the RKHS and a primal step in the space of probability measures. We then provide the first unified convergence analysis of our algorithm for this class of MKNE problems, establishing a convergence rate of $O(1/N)$ in the deterministic case and $O(1/\sqrt{N})$ in the stochastic case. As a case study, we apply our analysis to DRO, providing the first primal-dual convergence analysis for DRO with probability-metric constraints.
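The abstract describes the kernel mirror prox algorithm only at a high level. As a rough illustration, the sketch below instantiates an extragradient-style primal-dual iteration on a toy regularized bilinear saddle problem, with the primal measure restricted to weights over fixed atoms and the dual RKHS function parametrized by kernel coefficients. The specific objective, the simultaneous (rather than alternating) update order, the function names (`gaussian_kernel`, `kernel_mirror_prox`), and the step-size choices are all illustrative assumptions, not the paper's formulation or implementation.

```python
import numpy as np

def gaussian_kernel(X, Y, bw=1.0):
    """Gram matrix k(x_i, y_j) of a Gaussian (RBF) kernel."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bw ** 2))

def kernel_mirror_prox(X, n_iters=200, step=0.5, lam=1.0):
    """Toy primal-dual kernel mirror prox (a sketch, not the paper's method) for
        min_{rho in simplex}  max_{f in RKHS}   sum_i rho_i f(x_i) - (lam/2) ||f||_H^2,
    with the primal measure restricted to weights rho on fixed atoms X and the
    dual function parametrized as f = sum_j alpha_j k(x_j, .)."""
    n = X.shape[0]
    K = gaussian_kernel(X, X)
    rho = np.full(n, 1.0 / n)      # primal: probability weights on the atoms
    alpha = np.zeros(n)            # dual: RKHS coefficients of f

    def grad_rho(alpha):
        # partial derivative of the coupling w.r.t. rho_i is f(x_i) = (K alpha)_i
        return K @ alpha

    def grad_alpha(rho, alpha):
        # RKHS gradient of the objective in f: sum_i rho_i k(x_i, .) - lam * f,
        # which in coefficient form is rho - lam * alpha
        return rho - lam * alpha

    for _ in range(n_iters):
        # extragradient (mirror prox) half step
        rho_h = rho * np.exp(-step * grad_rho(alpha))    # entropic mirror step on the simplex
        rho_h /= rho_h.sum()
        alpha_h = alpha + step * grad_alpha(rho, alpha)  # ascent step in the RKHS

        # full step from the current iterate, using gradients at the half point
        rho = rho * np.exp(-step * grad_rho(alpha_h))
        rho /= rho.sum()
        alpha = alpha + step * grad_alpha(rho_h, alpha_h)

    return rho, alpha

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(30, 2))   # fixed atoms supporting the primal measure
    rho, alpha = kernel_mirror_prox(X)
    print(rho.round(3), np.linalg.norm(alpha).round(3))
```

In this sketch the primal step is an entropic mirror (multiplicative-weights) update on the simplex, while the dual step is a plain gradient ascent in RKHS coefficients; the actual algorithm operates over general probability measures rather than a fixed finite support.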