Continuous Normalizing Flows (CNFs) are a class of generative models that transform a prior distribution into a model distribution by solving an ordinary differential equation (ODE). We propose to train CNFs on manifolds by minimizing probability path divergence (PPD), a novel family of divergences between the probability density path generated by the CNF and a target probability density path. PPD is formulated using a logarithmic mass conservation formula, a linear first-order partial differential equation relating the log target probabilities and the CNF’s defining vector field. PPD has several key benefits over existing methods: it sidesteps the need to solve an ODE per iteration, readily applies to manifold data, scales to high dimensions, and is compatible with a large family of target paths interpolating between pure noise and data in finite time. Theoretically, PPD is shown to bound classical probability divergences. Empirically, we show that CNFs learned by minimizing PPD achieve state-of-the-art likelihoods and sample quality on existing low-dimensional manifold benchmarks, and provide the first example of a generative model that scales to moderately high-dimensional manifolds.
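The logarithmic mass conservation formula referenced above is, in the Euclidean case, the log form of the continuity equation: d/dt log p_t(x) + v(x, t) · ∇_x log p_t(x) + div_x v(x, t) = 0, which holds exactly when the vector field v transports the target density path p_t. Below is a minimal PyTorch sketch that evaluates the residual of this PDE at sampled points; the names v_theta and log_p_t are hypothetical, the plain squared-residual loss is an illustrative stand-in for the PPD family of norms, and the Euclidean divergence would need to be replaced by its Riemannian counterpart for manifold data.

import torch

# Sketch (not the paper's implementation): penalize the residual of the
# logarithmic mass conservation PDE
#     d/dt log p_t(x) + v_theta(x, t) . grad_x log p_t(x) + div_x v_theta(x, t) = 0
# at samples (x, t) drawn along a target probability path.

def divergence(v, x):
    # Exact divergence of v with respect to x via autograd (fine in low dimensions).
    div = torch.zeros(x.shape[0], device=x.device)
    for i in range(x.shape[1]):
        div = div + torch.autograd.grad(v[:, i].sum(), x, create_graph=True)[0][:, i]
    return div

def log_mass_conservation_residual(v_theta, log_p_t, x, t):
    # x: [batch, dim] sample locations, t: [batch, 1] times; both need gradients.
    x = x.requires_grad_(True)
    t = t.requires_grad_(True)
    logp = log_p_t(x, t)  # log target density along the path (assumed given in closed form)
    dlogp_dt = torch.autograd.grad(logp.sum(), t, create_graph=True)[0].squeeze(-1)
    grad_logp = torch.autograd.grad(logp.sum(), x, create_graph=True)[0]
    v = v_theta(x, t)  # CNF vector field
    transport = (v * grad_logp).sum(dim=1)
    return dlogp_dt + transport + divergence(v, x)  # ~0 when the CNF matches the path

# Training step sketch:
# loss = log_mass_conservation_residual(v_theta, log_p_t, x, t).pow(2).mean()

Because this residual depends only on the target log-density, its derivatives, and the vector field at sampled points, training on it requires no ODE simulation, which is the simulation-free property highlighted in the abstract.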
Author Information
Heli Ben-Hamu (Weizmann Institute of Science)
samuel cohen (University College London)
Joey Bose (McGill/Mila)
I am a PhD student at the RLLab at McGill/Mila, where I work on adversarial machine learning applied to different data domains such as images, text, and graphs. Previously, I was a Master’s student at the University of Toronto, where I researched crafting adversarial attacks on computer vision models using GANs. I also interned at Borealis AI, where I worked on applying adversarial learning principles to learn better embeddings (e.g., word embeddings) for machine learning models.
Brandon Amos (Meta AI (FAIR))
Maximilian Nickel (Meta AI)
Aditya Grover (UCLA)
Ricky T. Q. Chen (Facebook AI Research)
Yaron Lipman (Facebook AI Research)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Spotlight: Matching Normalizing Flows and Probability Paths on Manifolds »
  Tue. Jul 19th, 03:40 -- 03:45 PM, Room 310
More from the Same Authors
- 2021 : Neural Fixed-Point Acceleration for Convex Optimization »
  Shobha Venkataraman · Brandon Amos
- 2021 : Learning from an Exploring Demonstrator: Optimal Reward Estimation for Bandits »
  Wenshuo Guo · Kumar Agrawal · Aditya Grover · Vidya Muthukumar · Ashwin Pananjady
- 2021 : Decision Transformer: Reinforcement Learning via Sequence Modeling »
  Lili Chen · Kevin Lu · Aravind Rajeswaran · Kimin Lee · Aditya Grover · Michael Laskin · Pieter Abbeel · Aravind Srinivas · Igor Mordatch
- 2022 : Learning to Discretize for Continuous-time Sequence Compression »
  Ricky T. Q. Chen · Maximilian Nickel · Matthew Le · Matthew Muckley · Karen Ullrich
- 2022 : P24: Unifying Generative Models with GFlowNets »
  Dinghuai Zhang · Ricky T. Q. Chen
- 2023 : Neural Optimal Transport with Lagrangian Costs »
  Aram-Alexandre Pooladian · Carles Domingo i Enrich · Ricky T. Q. Chen · Brandon Amos
- 2023 : Koopman Constrained Policy Optimization: A Koopman operator theoretic method for differentiable optimal control in robotics »
  Matthew Retchin · Brandon Amos · Steven Brunton · Shuran Song
- 2023 : TaskMet: Task-Driven Metric Learning for Model Learning »
  Dishank Bansal · Ricky T. Q. Chen · Mustafa Mukadam · Brandon Amos
- 2023 : Landscape Surrogate: Learning Decision Losses for Mathematical Optimization Under Partial Information »
  Arman Zharmagambetov · Brandon Amos · Aaron Ferber · Taoan Huang · Bistra Dilkina · Yuandong Tian
- 2023 : On optimal control and machine learning »
  Brandon Amos
- 2023 Oral: Equivariant Polynomials for Graph Neural Networks »
  Omri Puny · Derek Lim · Bobak T Kiani · Haggai Maron · Yaron Lipman
- 2023 Poster: Equivariant Polynomials for Graph Neural Networks »
  Omri Puny · Derek Lim · Bobak T Kiani · Haggai Maron · Yaron Lipman
- 2023 Poster: Meta Optimal Transport »
  Brandon Amos · Giulia Luise · samuel cohen · Ievgen Redko
- 2023 Poster: Multisample Flow Matching: Straightening Flows with Minibatch Couplings »
  Aram-Alexandre Pooladian · Heli Ben-Hamu · Carles Domingo i Enrich · Brandon Amos · Yaron Lipman · Ricky T. Q. Chen
- 2023 Poster: On Kinetic Optimal Probability Paths for Generative Models »
  Neta Shaul · Ricky T. Q. Chen · Maximilian Nickel · Matthew Le · Yaron Lipman
- 2023 Poster: MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation »
  Omer Bar-Tal · Lior Yariv · Yaron Lipman · Tali Dekel
- 2023 Poster: Semi-Supervised Offline Reinforcement Learning with Action-Free Trajectories »
  Qinqing Zheng · Mikael Henaff · Brandon Amos · Aditya Grover
- 2022 : Differentiable optimization for control and reinforcement learning »
  Brandon Amos
- 2022 Poster: Online Decision Transformer »
  Qinqing Zheng · Amy Zhang · Aditya Grover
- 2022 Oral: Online Decision Transformer »
  Qinqing Zheng · Amy Zhang · Aditya Grover
- 2022 Poster: Transformer Neural Processes: Uncertainty-Aware Meta Learning Via Sequence Modeling »
  Tung Nguyen · Aditya Grover
- 2022 Spotlight: Transformer Neural Processes: Uncertainty-Aware Meta Learning Via Sequence Modeling »
  Tung Nguyen · Aditya Grover
- 2021 : Invited Talk 6 (Maximilian Nickel): Modeling Spatio-Temporal Events via Normalizing Flows »
  Maximilian Nickel
- 2021 Poster: CURI: A Benchmark for Productive Concept Learning Under Uncertainty »
  Shanmukha Ramakrishna Vedantam · Arthur Szlam · Maximilian Nickel · Ari Morcos · Brenden Lake
- 2021 Spotlight: CURI: A Benchmark for Productive Concept Learning Under Uncertainty »
  Shanmukha Ramakrishna Vedantam · Arthur Szlam · Maximilian Nickel · Ari Morcos · Brenden Lake
- 2021 Poster: Phase Transitions, Distance Functions, and Implicit Neural Representations »
  Yaron Lipman
- 2021 Spotlight: Phase Transitions, Distance Functions, and Implicit Neural Representations »
  Yaron Lipman
- 2021 Poster: CombOptNet: Fit the Right NP-Hard Problem by Learning Integer Programming Constraints »
  Anselm Paulus · Michal Rolinek · Vit Musil · Brandon Amos · Georg Martius
- 2021 Spotlight: CombOptNet: Fit the Right NP-Hard Problem by Learning Integer Programming Constraints »
  Anselm Paulus · Michal Rolinek · Vit Musil · Brandon Amos · Georg Martius
- 2021 Poster: Riemannian Convex Potential Maps »
  samuel cohen · Brandon Amos · Yaron Lipman
- 2021 Spotlight: Riemannian Convex Potential Maps »
  samuel cohen · Brandon Amos · Yaron Lipman
- 2020 Poster: Latent Variable Modelling with Hyperbolic Normalizing Flows »
  Joey Bose · Ariella Smofsky · Renjie Liao · Prakash Panangaden · Will Hamilton
- 2020 Poster: Healing Products of Gaussian Process Experts »
  samuel cohen · Rendani Mbuvha · Tshilidzi Marwala · Marc Deisenroth
- 2020 Poster: Implicit Geometric Regularization for Learning Shapes »
  Amos Gropp · Lior Yariv · Niv Haim · Matan Atzmon · Yaron Lipman
- 2020 Poster: The Differentiable Cross-Entropy Method »
  Brandon Amos · Denis Yarats
- 2020 Poster: Fair Generative Modeling via Weak Supervision »
  Kristy Choi · Aditya Grover · Trisha Singh · Rui Shu · Stefano Ermon
- 2019 : Yaron Lipman, Weizmann Institute of Science »
  Yaron Lipman
- 2019 Poster: Graphite: Iterative Generative Modeling of Graphs »
  Aditya Grover · Aaron Zweig · Stefano Ermon
- 2019 Oral: Graphite: Iterative Generative Modeling of Graphs »
  Aditya Grover · Aaron Zweig · Stefano Ermon
- 2019 Poster: Compositional Fairness Constraints for Graph Embeddings »
  Avishek Bose · William Hamilton
- 2019 Oral: Compositional Fairness Constraints for Graph Embeddings »
  Avishek Bose · William Hamilton
- 2019 Poster: On the Universality of Invariant Networks »
  Haggai Maron · Ethan Fetaya · Nimrod Segol · Yaron Lipman
- 2019 Poster: Neural Joint Source-Channel Coding »
  Kristy Choi · Kedar Tatwawadi · Aditya Grover · Tsachy Weissman · Stefano Ermon
- 2019 Oral: Neural Joint Source-Channel Coding »
  Kristy Choi · Kedar Tatwawadi · Aditya Grover · Tsachy Weissman · Stefano Ermon
- 2019 Oral: On the Universality of Invariant Networks »
  Haggai Maron · Ethan Fetaya · Nimrod Segol · Yaron Lipman
- 2018 Poster: Modeling Sparse Deviations for Compressed Sensing using Generative Models »
  Manik Dhar · Aditya Grover · Stefano Ermon
- 2018 Oral: Modeling Sparse Deviations for Compressed Sensing using Generative Models »
  Manik Dhar · Aditya Grover · Stefano Ermon
- 2018 Poster: Learning Policy Representations in Multiagent Systems »
  Aditya Grover · Maruan Al-Shedivat · Jayesh K. Gupta · Yura Burda · Harrison Edwards
- 2018 Oral: Learning Policy Representations in Multiagent Systems »
  Aditya Grover · Maruan Al-Shedivat · Jayesh K. Gupta · Yura Burda · Harrison Edwards
- 2018 Poster: Learning Continuous Hierarchies in the Lorentz Model of Hyperbolic Geometry »
  Maximilian Nickel · Douwe Kiela
- 2018 Oral: Learning Continuous Hierarchies in the Lorentz Model of Hyperbolic Geometry »
  Maximilian Nickel · Douwe Kiela