Poster
Latent Variable Modelling with Hyperbolic Normalizing Flows
Joey Bose · Ariella Smofsky · Renjie Liao · Prakash Panangaden · Will Hamilton

Thu Jul 16 06:00 AM -- 06:45 AM & Thu Jul 16 05:00 PM -- 05:45 PM (PDT)
The choice of approximate posterior distribution plays a central role in stochastic variational inference (SVI). One effective solution is the use of normalizing flows to construct flexible posterior distributions. However, one key limitation of existing normalizing flows is that they are restricted to Euclidean space and are ill-equipped to model data with an underlying hierarchical structure. To address this fundamental limitation, we present the first extension of normalizing flows to hyperbolic spaces. We first elevate normalizing flows to hyperbolic spaces using coupling transforms defined on the tangent bundle, termed Tangent Coupling ($\mathcal{TC}$). We further introduce Wrapped Hyperboloid Coupling ($\mathcal{W}\mathbb{H}C$), a fully invertible and learnable transformation that explicitly utilizes the geometric structure of hyperbolic spaces, allowing for expressive posteriors while remaining efficient to sample from. We demonstrate the efficacy of our novel normalizing flows over hyperbolic VAEs and Euclidean normalizing flows. Our approach achieves improved performance on density estimation, as well as on reconstruction of real-world graph data, which exhibits a hierarchical structure. Finally, we show that our approach can be used to power a generative model over hierarchical data using hyperbolic latent variables.
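The Tangent Coupling recipe described in the abstract lends itself to a compact sketch: map a point on the hyperboloid to the tangent space at the origin with the logarithm map, apply a standard affine coupling transform there, and map back with the exponential map. Below is a minimal NumPy illustration of that idea, not the authors' implementation: the conditioner "networks" `s` and `t` are placeholder random linear maps, the dimension and names are assumptions, and the log-det-Jacobian term needed for density evaluation is omitted.

```python
# Toy Tangent Coupling layer on the hyperboloid model of hyperbolic space.
# Illustrative sketch only; s and t are fixed random maps, not trained networks.
import numpy as np

n = 4                                  # dimension of the hyperbolic space H^n
o = np.zeros(n + 1); o[0] = 1.0        # origin of the hyperboloid

def lorentz_inner(x, y):
    """Lorentzian inner product <x, y>_L = -x0*y0 + sum_i xi*yi."""
    return -x[0] * y[0] + np.dot(x[1:], y[1:])

def exp0(v):
    """Exponential map at the origin: tangent vector v in R^n -> hyperboloid."""
    r = np.linalg.norm(v)
    if r < 1e-12:
        return o.copy()
    x = np.zeros(n + 1)
    x[0] = np.cosh(r)
    x[1:] = np.sinh(r) * v / r
    return x

def log0(x):
    """Logarithm map at the origin: hyperboloid point -> tangent vector in R^n."""
    r = np.arccosh(np.clip(x[0], 1.0, None))
    u = x[1:]
    nu = np.linalg.norm(u)
    return r * u / nu if nu > 1e-12 else np.zeros(n)

# Fixed random "conditioners" standing in for the neural networks s(.) and t(.).
rng = np.random.default_rng(0)
Ws = rng.normal(size=(n // 2, n // 2))
Wt = rng.normal(size=(n // 2, n // 2))
s = lambda v1: np.tanh(Ws @ v1)        # bounded log-scale
t = lambda v1: Wt @ v1                 # translation

def tc_forward(x):
    """Log-map to the tangent space at o, affine coupling, exp-map back."""
    v = log0(x)
    v1, v2 = v[: n // 2], v[n // 2 :]
    v2 = v2 * np.exp(s(v1)) + t(v1)    # standard affine coupling update
    return exp0(np.concatenate([v1, v2]))

def tc_inverse(y):
    """Exact inverse: undo the coupling in the tangent space."""
    u = log0(y)
    u1, u2 = u[: n // 2], u[n // 2 :]
    u2 = (u2 - t(u1)) * np.exp(-s(u1))
    return exp0(np.concatenate([u1, u2]))

# Round-trip check on a random hyperboloid point.
x = exp0(0.5 * rng.normal(size=n))
y = tc_forward(x)                      # output stays on the hyperboloid
x_rec = tc_inverse(y)
print(np.allclose(x_rec, x, atol=1e-8))   # True
```

Because both the coupling transform and the exp/log maps at the origin are invertible in closed form, the whole layer is invertible, which is what makes it usable as a normalizing flow step.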

Author Information

Joey Bose (McGill/Mila)

I’m a PhD student at the RLLab at McGill/Mila, where I work on adversarial machine learning applied to different data domains, such as images, text, and graphs. Previously, I was a Master’s student at the University of Toronto, where I researched crafting adversarial attacks on computer vision models using GANs. I also interned at Borealis AI, where I worked on applying adversarial learning principles to learn better embeddings, e.g., word embeddings for machine learning models.

Ariella Smofsky (McGill University and Mila)

MSc student at Mila. I am broadly interested in the intersection of formal methods and machine learning. In particular, I am interested in graph generation, code representation learning, active learning, latent variable modeling, and automata theory.

Renjie Liao (University of Toronto)
Prakash Panangaden (McGill University and Mila)

MSc, IIT Kanpur (Physics); MS, University of Chicago (Physics); PhD, University of Wisconsin-Milwaukee (Physics); MS, University of Utah (Computer Science). Assistant Professor of Computer Science, Cornell University; Professor, McGill University. Fellow of the Royal Society of Canada.

Will Hamilton (McGill University and Mila)
