

Poster in Workshop: Topology, Algebra, and Geometry in Machine Learning

Neural Implicit Manifold Learning for Topology-Aware Generative Modelling

Brendan Ross · Gabriel Loaiza-Ganem · Anthony Caterini · Jesse Cresswell


Abstract:

Natural data observed in R^n is often constrained to an m-dimensional manifold M, where m < n. Current generative models represent this manifold by mapping an m-dimensional latent variable through a neural network f : R^m → R^n. Such procedures, which we call pushforward models, incur a straightforward limitation: manifolds cannot in general be represented with a single parameterization, meaning that attempts to do so will incur either computational instability or the inability to learn probability densities within the manifold. To remedy this problem, we propose to model M as a "neural implicit manifold": the set of zeros of a neural network. To learn the data distribution within M, we introduce the "constrained energy-based model," which uses a constrained variant of Langevin dynamics to train and sample within the learned manifold. The resulting model can be manipulated with an "arithmetic of manifolds" which allows practitioners to take unions and intersections of model manifolds. In experiments on synthetic and natural data, we show that constrained EBMs can learn manifold-supported distributions with complex topologies more accurately than pushforward models.
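
The following is a minimal, hypothetical sketch (not the authors' released code) of the two ingredients the abstract describes: a manifold represented implicitly as the zero set of a neural network F, and a projection-based variant of constrained Langevin dynamics for sampling from an energy-based model restricted to that zero set. All names (F, energy, project_to_manifold, constrained_langevin_step) and the specific projection scheme are illustrative assumptions, not the paper's exact algorithm.

```python
# Hypothetical sketch of a neural implicit manifold and constrained Langevin sampling.
import torch
import torch.nn as nn

n, m = 3, 2                      # ambient dimension and (assumed) manifold dimension
F = nn.Sequential(               # implicit manifold: M = {x in R^n : F(x) = 0}
    nn.Linear(n, 64), nn.SiLU(),
    nn.Linear(64, n - m),        # n - m constraints cut out an m-dimensional zero set
)
energy = nn.Sequential(          # EBM energy defined on the ambient space
    nn.Linear(n, 64), nn.SiLU(),
    nn.Linear(64, 1),
)

def project_to_manifold(x, steps=10, lr=0.1):
    """Pull points back toward {F(x) = 0} by gradient descent on ||F(x)||^2."""
    x = x.detach().requires_grad_(True)
    for _ in range(steps):
        loss = F(x).pow(2).sum()
        (grad,) = torch.autograd.grad(loss, x)
        x = (x - lr * grad).detach().requires_grad_(True)
    return x.detach()

def constrained_langevin_step(x, step_size=1e-2):
    """One Langevin step on the energy, followed by projection onto the manifold."""
    x = x.detach().requires_grad_(True)
    (grad,) = torch.autograd.grad(energy(x).sum(), x)
    x = x - 0.5 * step_size * grad + step_size ** 0.5 * torch.randn_like(x)
    return project_to_manifold(x)

x = torch.randn(128, n)          # initialize a batch of samples in ambient space
for _ in range(100):
    x = constrained_langevin_step(x)
```

In this sketch the projection step is what keeps the chain on the learned zero set; the paper's constrained Langevin dynamics may differ in how the constraint is enforced.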
