Spotlight
Matching Normalizing Flows and Probability Paths on Manifolds
Heli Ben-Hamu · Samuel Cohen · Joey Bose · Brandon Amos · Maximilian Nickel · Aditya Grover · Ricky T. Q. Chen · Yaron Lipman

Tue Jul 19 08:40 AM -- 08:45 AM (PDT)

Continuous Normalizing Flows (CNFs) are a class of generative models that transform a prior distribution to a model distribution by solving an ordinary differential equation (ODE). We propose to train CNFs on manifolds by minimizing probability path divergence (PPD), a novel family of divergences between the probability density path generated by the CNF and a target probability density path. PPD is formulated using a logarithmic mass conservation formula, which is a linear first-order partial differential equation relating the log target probabilities and the CNF’s defining vector field. PPD has several key benefits over existing methods: it sidesteps the need to solve an ODE per iteration, readily applies to manifold data, scales to high dimensions, and is compatible with a large family of target paths interpolating pure noise and data in finite time. Theoretically, PPD is shown to bound classical probability divergences. Empirically, we show that CNFs learned by minimizing PPD achieve state-of-the-art results in likelihoods and sample quality on existing low-dimensional manifold benchmarks, and provide the first example of a generative model scaling to moderately high dimensional manifolds.
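The logarithmic mass conservation formula mentioned in the abstract follows from the continuity equation: in log form, a density path p_t and a vector field v_t generating it satisfy ∂_t log p_t + ⟨v_t, ∇ log p_t⟩ + div v_t = 0. As a minimal sketch (using an illustrative 1-D Euclidean Gaussian path, not the paper's manifold setup), one can verify numerically that a matched path/field pair makes this residual vanish:

```python
import numpy as np

def log_p(x, t):
    # Illustrative density path: N(0, sigma_t^2) with sigma_t = 1 + t
    s = 1.0 + t
    return -x**2 / (2 * s**2) - np.log(s) - 0.5 * np.log(2 * np.pi)

def v(x, t):
    # Vector field generating this path: v_t(x) = (sigma_t' / sigma_t) * x,
    # with sigma_t' = 1 for the choice sigma_t = 1 + t
    s = 1.0 + t
    return x / s

def residual(x, t, h=1e-5):
    # Finite-difference check of the log mass conservation PDE:
    #   d/dt log p_t(x) + v_t(x) * d/dx log p_t(x) + d/dx v_t(x) = 0
    dt = (log_p(x, t + h) - log_p(x, t - h)) / (2 * h)
    dx = (log_p(x + h, t) - log_p(x - h, t)) / (2 * h)
    dv = (v(x + h, t) - v(x - h, t)) / (2 * h)
    return dt + v(x, t) * dx + dv

xs = np.linspace(-3.0, 3.0, 7)
print(np.max(np.abs(residual(xs, 0.5))))  # near zero: path and field are matched
```

A PPD-style training loss would penalize this residual for a learned vector field against a chosen target log-density path; the Gaussian path and field names here are hypothetical stand-ins for illustration only.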

Author Information

Heli Ben-Hamu (Weizmann Institute of Science)
Samuel Cohen (University College London)
Joey Bose (McGill/Mila)

I’m a PhD student at the RLLab at McGill/Mila, where I work on adversarial machine learning applied to different data domains, such as images, text, and graphs. Previously, I was a Master’s student at the University of Toronto, where I researched crafting adversarial attacks on computer vision models using GANs. I also interned at Borealis AI, where I worked on applying adversarial learning principles to learn better embeddings, such as word embeddings, for machine learning models.

Brandon Amos (Meta AI (FAIR))
Maximilian Nickel (Facebook AI Research)
Aditya Grover (UCLA)
Ricky T. Q. Chen (Facebook AI Research)
Yaron Lipman (Facebook AI Research)
