Poster in Workshop: Structured Probabilistic Inference and Generative Modeling

Disentangled Representation Learning through Geometry Preservation with the Gromov-Monge Gap

Théo Uscidda · Luca Eyring · Karsten Roth · Fabian Theis · Zeynep Akata · Marco Cuturi

Keywords: [ Representation Learning ] [ Optimal Transport ] [ Disentanglement ]


Abstract:

Learning disentangled representations in an unsupervised manner is a fundamental challenge with significant promise for improving generalization, interpretability, and fairness. While impossible in general, recent work has shown that unsupervised disentanglement is provably achievable under certain geometric constraints, such as local isometry. Leveraging these insights, we propose a novel perspective on disentangled representation learning through the lens of quadratic optimal transport (OT). We formulate the OT problem in the Gromov-Monge setting, which makes it possible to align distributions living in different spaces while preserving their intrinsic geometry. To this end, we propose the Gromov-Monge Gap (GMG), a regularizer that drives a map toward the most geometry-preserving mapping satisfying a fixed transportation constraint. We demonstrate its effectiveness for disentanglement on four standard benchmarks. Moreover, we show that geometry preservation can encourage unsupervised disentanglement even without the standard reconstruction objective, making the underlying model decoder-free and offering a more practically viable and scalable perspective on disentanglement.
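To make the geometry-preservation idea concrete, the following is a minimal sketch (not the authors' GMG, which is defined via a Gromov-Monge OT problem with a transportation constraint): it measures the quadratic distortion between the pairwise-distance geometry of inputs and of their embeddings, the quantity a geometry-preserving map should drive to zero. All function names here are illustrative assumptions, not from the paper.

```python
import numpy as np

def pairwise_sq_dists(X):
    # Squared Euclidean distance matrix for the rows of X.
    sq = np.sum(X ** 2, axis=1)
    return sq[:, None] + sq[None, :] - 2.0 * X @ X.T

def geometry_distortion(X, Z):
    """Quadratic (Gromov-style) distortion between the pairwise
    geometry of inputs X and embeddings Z. An isometric map
    yields (numerically) zero distortion."""
    Dx = pairwise_sq_dists(X)
    Dz = pairwise_sq_dists(Z)
    n = X.shape[0]
    return np.sum((Dx - Dz) ** 2) / (n * n)

# An orthogonal map preserves all pairwise distances, so its
# distortion is ~0; a random re-embedding is heavily penalized.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random rotation/reflection
Z_iso = X @ Q
Z_rand = rng.normal(size=(64, 3))
print(geometry_distortion(X, Z_iso))   # near zero
print(geometry_distortion(X, Z_rand))  # much larger
```

In a learning setup, such a distortion term would be added (per mini-batch) to the training objective of the encoder; the GMG additionally accounts for the optimal-transport alignment constraint rather than penalizing raw distortion alone.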
