

Poster

Variational Wasserstein gradient flow

Jiaojiao Fan · Qinsheng Zhang · Amirhossein Taghvaei · Yongxin Chen

Hall E #323

Keywords: [ DL: Generative Models and Autoencoders ] [ MISC: Scalable Algorithms ] [ OPT: Sampling and Optimization ] [ DL: Algorithms ]


Abstract:

Wasserstein gradient flow has emerged as a promising approach to solving optimization problems over the space of probability distributions. A recent trend is to use the well-known JKO scheme in combination with input convex neural networks to numerically implement the proximal step. The most challenging step in this setup is to evaluate functionals that involve the density explicitly, such as entropy, in terms of samples. This paper builds on these recent works with a slight but crucial difference: we propose to use a variational formulation of the objective function, expressed as a maximization over a parametric class of functions. Theoretically, the proposed variational formulation allows the construction of gradient flows directly for empirical distributions with a well-defined and meaningful objective function. Computationally, this approach replaces the expensive step of existing methods for handling density-dependent objectives with inner-loop updates that require only a small batch of samples and scale well with dimension. The performance and scalability of the proposed method are illustrated through several numerical experiments on high-dimensional synthetic and real datasets.
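The abstract outlines a general recipe: a JKO proximal step implemented with an input convex neural network (ICNN), where the density-dependent term of the objective is replaced by a variational maximization over a parametric class of functions evaluated on samples. Below is a minimal, hedged sketch of that recipe in PyTorch. It is not the authors' implementation: the use of the Donsker-Varadhan dual of the KL divergence as the variational form, the network architectures, the Gaussian toy data, and all hyperparameters are assumptions made purely for illustration.

```python
# Hedged sketch of one JKO proximal step with an ICNN-parameterized transport
# map and a variational (dual) estimate of a density-dependent objective.
# All choices below (Donsker-Varadhan dual, network sizes, toy data,
# hyperparameters) are assumptions for illustration, not the paper's setup.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)


class ICNN(nn.Module):
    """Small input convex neural network psi(x): convex in x because the
    hidden-to-hidden weights are kept nonnegative (via softplus) and the
    activation is convex and nondecreasing."""
    def __init__(self, dim, hidden=64, layers=3):
        super().__init__()
        self.Wx = nn.ModuleList([nn.Linear(dim, hidden) for _ in range(layers)])
        self.Wz = nn.ModuleList([nn.Linear(hidden, hidden, bias=False)
                                 for _ in range(layers - 1)])
        self.out = nn.Linear(hidden, 1, bias=False)
        self.act = nn.Softplus()

    def forward(self, x):
        z = self.act(self.Wx[0](x))
        for wx, wz in zip(self.Wx[1:], self.Wz):
            z = self.act(wx(x) + F.linear(z, F.softplus(wz.weight)))
        return F.linear(z, F.softplus(self.out.weight))


def transport(psi, x):
    """Brenier-type transport map T = grad psi(x), computed with autograd.
    create_graph=True keeps the map differentiable w.r.t. psi's parameters."""
    xc = x.clone().requires_grad_(True)
    (grad,) = torch.autograd.grad(psi(xc).sum(), xc, create_graph=True)
    return grad


def dv_kl(critic, x_mu, x_nu):
    """Donsker-Varadhan dual of KL(mu || nu), estimated from samples only:
    sup_g E_mu[g] - log E_nu[exp g]. No density evaluation is required."""
    e_mu = critic(x_mu).mean()
    g_nu = critic(x_nu).squeeze(-1)
    return e_mu - (torch.logsumexp(g_nu, dim=0) - math.log(g_nu.shape[0]))


def jko_step(psi, critic, x_prev, x_target, step=0.5,
             outer_iters=300, inner_iters=5, batch=256):
    """One proximal (JKO) step: min_psi  F((grad psi)_# rho_k) + W2 proximity,
    with the functional F estimated by an inner maximization over the critic."""
    opt_psi = torch.optim.Adam(psi.parameters(), lr=1e-3)
    opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
    for _ in range(outer_iters):
        x = x_prev[torch.randint(0, x_prev.shape[0], (batch,))]
        y = x_target[torch.randint(0, x_target.shape[0], (batch,))]
        # Inner loop: tighten the variational estimate using small batches.
        pushed_fixed = transport(psi, x).detach()
        for _ in range(inner_iters):
            loss_c = -dv_kl(critic, pushed_fixed, y)
            opt_c.zero_grad()
            loss_c.backward()
            opt_c.step()
        # Outer step: objective estimate plus the Wasserstein proximal term
        # (1 / 2h) * E||T(x) - x||^2 for the map T = grad psi.
        pushed = transport(psi, x)
        prox = (pushed - x).pow(2).sum(dim=1).mean() / (2.0 * step)
        loss_psi = dv_kl(critic, pushed, y) + prox
        opt_psi.zero_grad()
        loss_psi.backward()
        opt_psi.step()
    return transport(psi, x_prev).detach()


if __name__ == "__main__":
    dim = 2
    x0 = 0.5 * torch.randn(2048, dim) + 4.0      # particles of rho_k (assumed)
    target = torch.randn(2048, dim)              # samples of the target (assumed)
    psi = ICNN(dim)
    critic = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))
    x1 = jko_step(psi, critic, x0, target)
    print("mean before:", x0.mean(0).tolist())
    print("mean after one JKO step:", x1.mean(0).tolist())
```

The point of the sketch is the division of labor described in the abstract: the inner critic updates estimate the density-dependent objective from mini-batches of samples alone, while the outer update adds the Wasserstein-2 proximal penalty of the JKO step and moves the ICNN potential.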
