Hyperbolic space is widely used for representing hierarchical datasets due to its ability to embed trees with low distortion. However, this property comes at the price of numerical instability: training hyperbolic learning models can produce values unrepresentable in floating-point arithmetic, leading to catastrophic NaN failures. In this work, we analyze the limitations of two popular models of hyperbolic space, namely the Poincaré ball and the Lorentz model. We find that, under 64-bit arithmetic, the Poincaré ball has a larger capacity than the Lorentz model for correctly representing points. The Lorentz model, however, is superior to the Poincaré ball from the perspective of optimization, which we validate theoretically. To address these limitations, we identify a Euclidean parametrization of hyperbolic space that alleviates both issues. We further extend this parametrization to hyperbolic hyperplanes and demonstrate its effectiveness in improving the performance of hyperbolic SVMs.
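The NaN failure mode the abstract describes can be reproduced in a few lines. The sketch below (an illustrative example, not the paper's code; it assumes the hyperboloid model with curvature -1 and float64 arithmetic) shows how catastrophic cancellation in the Minkowski inner product drives `arccosh` outside its domain, so even the distance from a far-from-origin point to itself comes back NaN:

```python
import numpy as np

def lift(v):
    """Lift a Euclidean vector v to the hyperboloid: x = (sqrt(1 + ||v||^2), v)."""
    v = np.asarray(v, dtype=np.float64)
    return np.concatenate(([np.sqrt(1.0 + v @ v)], v))

def lorentz_distance(x, y):
    """Geodesic distance on the hyperboloid: d(x, y) = arccosh(-<x, y>_L)."""
    # Minkowski inner product: <x, y>_L = -x0*y0 + <x_1:, y_1:>
    inner = -x[0] * y[0] + np.dot(x[1:], y[1:])
    # Mathematically -inner >= 1 on the hyperboloid, but rounding can push it
    # below 1, where arccosh is undefined and returns NaN.
    return np.arccosh(-inner)

# For a point far from the origin, 1 + ||v||^2 rounds to ||v||^2 in float64,
# so the two terms of the inner product cancel exactly and -<x, x>_L becomes 0.
x = lift([1e8])
with np.errstate(invalid="ignore"):
    d = lorentz_distance(x, x)   # mathematically 0
print(d)  # NaN: arccosh evaluated outside its domain [1, inf)
```

This is one concrete instance of the "unrepresentable values" issue: the Lorentz time coordinate grows like cosh of the distance to the origin, so the cancellation error overwhelms the unit gap that `arccosh` relies on well before overflow occurs.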
Author Information
Gal Mishne (UC San Diego)
Zhengchao Wan (UCSD)
Yusu Wang (UC San Diego)
Sheng Yang (Harvard University)
More from the Same Authors
- 2023: Product Manifold Learning with Independent Coordinate Selection (Jesse He · Tristan Brugère · Gal Mishne)
- 2023: The Weisfeiler-Lehman Distance: Reinterpretation and Connection with GNNs (Samantha Chen · Sunhyuk Lim · Facundo Memoli · Zhengchao Wan · Yusu Wang)
- 2023: Neural Approaches for Geometric Problems (Yusu Wang)
- 2023 Poster: Hyperbolic Diffusion Embedding and Distance for Hierarchical Representation Learning (Ya-Wei Eileen Lin · Ronald Coifman · Gal Mishne · Ronen Talmon)
- 2023 Poster: On the Connection Between MPNN and Graph Transformer (Chen Cai · Truong Son Hy · Rose Yu · Yusu Wang)
- 2023 Poster: The Persistent Laplacian for Data Science: Evaluating Higher-Order Persistent Spectral Representations of Data (Thomas Davies · Zhengchao Wan · Ruben Sanchez-Garcia)
- 2023 Poster: Understanding Oversquashing in GNNs through the Lens of Effective Resistance (Mitchell Black · Zhengchao Wan · Amir Nayyeri · Yusu Wang)
- 2022: Evaluating Disentanglement in Generative Models Without Knowledge of Latent Factors (Chester Holtz · Gal Mishne · Alexander Cloninger)
- 2022 Poster: Convergence of Invariant Graph Networks (Chen Cai · Yusu Wang)
- 2022 Spotlight: Convergence of Invariant Graph Networks (Chen Cai · Yusu Wang)
- 2022 Poster: Weisfeiler-Lehman Meets Gromov-Wasserstein (Samantha Chen · Sunhyuk Lim · Facundo Memoli · Zhengchao Wan · Yusu Wang)
- 2022 Poster: Generative Coarse-Graining of Molecular Conformations (Wujie Wang · Minkai Xu · Chen Cai · Benjamin Kurt Miller · Tess Smidt · Yusu Wang · Jian Tang · Rafael Gomez-Bombarelli)
- 2022 Spotlight: Generative Coarse-Graining of Molecular Conformations (Wujie Wang · Minkai Xu · Chen Cai · Benjamin Kurt Miller · Tess Smidt · Yusu Wang · Jian Tang · Rafael Gomez-Bombarelli)
- 2022 Spotlight: Weisfeiler-Lehman Meets Gromov-Wasserstein (Samantha Chen · Sunhyuk Lim · Facundo Memoli · Zhengchao Wan · Yusu Wang)