Poster
in
Workshop: Geometry-grounded Representation Learning and Generative Modeling

Geometry-Aware Autoencoders for Metric Learning and Generative Modeling on Data Manifolds

Xingzhi Sun · Danqi Liao · Kincaid Macdonald · Yanlei Zhang · Guillaume Huguet · Guy Wolf · Ian Adelstein · Tim G. J. Rudner · Smita Krishnaswamy

Keywords: [ Riemannian Generative Modeling ] [ Manifold Learning ] [ Single-cell Dynamics ]


Abstract:

Non-linear dimensionality reduction methods have proven successful at learning low-dimensional representations of high-dimensional point clouds on or near data manifolds. However, existing methods are not easily extensible: for large datasets, it is prohibitively expensive to add new points to these embeddings. As a result, it is very difficult to use existing embeddings generatively, i.e., to sample new points on and along these manifolds. In this paper, we propose GAGA (geometry-aware generative autoencoders), a framework that merges the power of generative deep learning with non-linear manifold learning by: 1) learning generalizable geometry-aware neural network embeddings based on non-linear dimensionality reduction methods such as PHATE and diffusion maps, 2) deriving a non-Euclidean pullback metric on the data space to generate points faithfully along data-manifold geodesics, and 3) learning a flow on the manifold that allows us to transport populations. We provide illustrations on easily interpretable synthetic datasets and showcase results on simulated and real single-cell datasets. We show that geodesic-based generation is especially important for scientific datasets where the manifold represents a state space and geodesics can represent the dynamics of entities over this space.
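The pullback-metric idea in step 2 can be illustrated with a minimal sketch. The abstract does not specify the implementation, so the decoder architecture and dimensions below are hypothetical: given a decoder g mapping latent codes to the ambient data space, pulling back the Euclidean ambient metric through g gives G(z) = J(z)^T J(z), where J(z) is the decoder Jacobian at z. Lengths of latent curves measured under G then approximate lengths along the manifold, which is what makes geodesic-based generation possible.

```python
import torch

# Hypothetical decoder: maps 2-D latent codes to a 5-D ambient data space.
# (Architecture and sizes are illustrative, not the paper's actual model.)
torch.manual_seed(0)
decoder = torch.nn.Sequential(
    torch.nn.Linear(2, 16),
    torch.nn.Tanh(),
    torch.nn.Linear(16, 5),
)

def pullback_metric(z: torch.Tensor) -> torch.Tensor:
    """Pullback of the Euclidean ambient metric through the decoder:
    G(z) = J(z)^T J(z), with J the decoder Jacobian at latent point z."""
    J = torch.autograd.functional.jacobian(decoder, z)  # shape (5, 2)
    return J.T @ J                                      # shape (2, 2)

z = torch.zeros(2)
G = pullback_metric(z)
# The squared length of a small latent step dz under this metric,
# dz^T G dz, approximates the squared ambient distance the decoder induces.
```

G is symmetric positive semi-definite by construction, so it defines a (possibly degenerate) Riemannian metric on the latent space; geodesics under it correspond to shortest paths along the decoded manifold.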
