Differentiating through the Fréchet Mean
Recent advances in deep representation learning on Riemannian manifolds extend classical deep learning operations to better capture the geometry of the manifold. One possible extension is the Fréchet mean, the generalization of the Euclidean mean; however, it has been difficult to apply because it lacks a closed form with an easily computable derivative. In this paper, we show how to differentiate through the Fréchet mean for arbitrary Riemannian manifolds. Then, focusing on hyperbolic space, we derive explicit gradient expressions and a fast, accurate, and hyperparameter-free Fréchet mean solver. This fully integrates the Fréchet mean into the hyperbolic neural network pipeline. To demonstrate this integration, we present two case studies. First, we apply our Fréchet mean to the existing Hyperbolic Graph Convolutional Network, replacing its projected aggregation with the Fréchet mean to obtain state-of-the-art results on datasets with high hyperbolicity. Second, to demonstrate the Fréchet mean's capacity to generalize Euclidean neural network operations, we develop a hyperbolic batch normalization method that gives an improvement parallel to the one observed in the Euclidean setting.
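To make the abstract's central object concrete, here is a minimal sketch (not the paper's method) of a differentiable Fréchet mean on the Poincaré ball model of hyperbolic space: the mean is approximated by naive unrolled gradient descent on the weighted sum of squared geodesic distances, so autograd can backpropagate through every step. The function names, warm start, step size, and iteration count are all illustrative assumptions; the paper's contribution is precisely to replace this kind of unrolling with explicit gradient expressions and a fast, hyperparameter-free solver.

```python
# Illustrative sketch only -- NOT the paper's solver. Computes a weighted
# Frechet mean on the Poincare ball by unrolled gradient descent, so that
# PyTorch autograd can differentiate the output w.r.t. the input points.
import torch

def poincare_dist(x, y, eps=1e-7):
    """Geodesic distance on the Poincare ball of curvature -1."""
    sq_diff = ((x - y) ** 2).sum(-1)
    denom = ((1 - (x ** 2).sum(-1)) * (1 - (y ** 2).sum(-1))).clamp_min(eps)
    # the clamp keeps acosh's argument >= 1 despite floating-point error
    return torch.acosh((1 + 2 * sq_diff / denom).clamp_min(1 + eps))

def frechet_mean(points, weights=None, steps=100, lr=0.05):
    """Approximate argmin_m sum_i w_i * d(m, x_i)^2 by unrolled GD."""
    if weights is None:
        weights = torch.ones(points.shape[0], dtype=points.dtype)
    m = points.mean(0)  # Euclidean mean: a warm start inside the ball
    if not m.requires_grad:  # make the inner loop differentiable anyway
        m = m.detach().requires_grad_(True)
    for _ in range(steps):
        loss = (weights * poincare_dist(m, points) ** 2).sum()
        # create_graph=True keeps each descent step itself differentiable
        (grad,) = torch.autograd.grad(loss, m, create_graph=True)
        m = m - lr * grad
        norm = m.norm().clamp_min(1e-7)
        m = torch.where(norm >= 1, m * (0.999 / norm), m)  # stay in ball
    return m
```

For example, with `x = (0.3 * torch.randn(8, 2)).requires_grad_(True)`, calling `frechet_mean(x).sum().backward()` backpropagates through all unrolled iterations and populates `x.grad`; the explicit gradient expressions derived in the paper avoid this unrolling cost entirely.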
Author Information
Aaron Lou (Cornell University)
Isay Katsman (Cornell University)
Qingxuan Jiang (Cornell University)
Serge Belongie (Cornell University)
Ser Nam Lim (Facebook)
Christopher De Sa (Cornell University)
More from the Same Authors
- 2021: Equivariant Manifold Flows
  Isay Katsman
- 2023: Metric Compatible Training for Online Backfilling in Large-Scale Retrieval
  Seonguk Seo · Mustafa Gokhan Uzunbas · Bohyung Han · Xuefei Cao · Joena Zhang · Taipeng Tian · Ser Nam Lim
- 2023 Poster: Graph Inductive Biases in Transformers without Message Passing
  Liheng Ma · Chen Lin · Derek Lim · Adriana Romero Soriano · Puneet Dokania · Mark Coates · Phil Torr · Ser Nam Lim
- 2022: MCTensor: A High-Precision Deep Learning Library with Multi-Component Floating-Point
  Tao Yu · Wentao Guo · Canal Li · Tiancheng Yuan · Christopher De Sa
- 2022: Riemannian Residual Neural Networks
  Isay Katsman · Eric Chen · Sidhanth Holalkere · Aaron Lou · Ser Nam Lim · Christopher De Sa
- 2022 Poster: Low-Precision Stochastic Gradient Langevin Dynamics
  Ruqi Zhang · Andrew Wilson · Christopher De Sa
- 2022 Spotlight: Low-Precision Stochastic Gradient Langevin Dynamics
  Ruqi Zhang · Andrew Wilson · Christopher De Sa
- 2021 Poster: Variance Reduced Training with Stratified Sampling for Forecasting Models
  Yucheng Lu · Youngsuk Park · Lifan Chen · Yuyang Wang · Christopher De Sa · Dean Foster
- 2021 Spotlight: Variance Reduced Training with Stratified Sampling for Forecasting Models
  Yucheng Lu · Youngsuk Park · Lifan Chen · Yuyang Wang · Christopher De Sa · Dean Foster
- 2021 Poster: Low-Precision Reinforcement Learning: Running Soft Actor-Critic in Half Precision
  Johan Björck · Xiangyu Chen · Christopher De Sa · Carla Gomes · Kilian Weinberger
- 2021 Spotlight: Low-Precision Reinforcement Learning: Running Soft Actor-Critic in Half Precision
  Johan Björck · Xiangyu Chen · Christopher De Sa · Carla Gomes · Kilian Weinberger
- 2021 Poster: Optimal Complexity in Decentralized Training
  Yucheng Lu · Christopher De Sa
- 2021 Oral: Optimal Complexity in Decentralized Training
  Yucheng Lu · Christopher De Sa
- 2020 Poster: Moniqua: Modulo Quantized Communication in Decentralized SGD
  Yucheng Lu · Christopher De Sa
- 2019 Poster: Distributed Learning with Sublinear Communication
  Jayadev Acharya · Christopher De Sa · Dylan Foster · Karthik Sridharan
- 2019 Oral: Distributed Learning with Sublinear Communication
  Jayadev Acharya · Christopher De Sa · Dylan Foster · Karthik Sridharan
- 2019 Poster: SWALP: Stochastic Weight Averaging in Low Precision Training
  Guandao Yang · Tianyi Zhang · Polina Kirichenko · Junwen Bai · Andrew Wilson · Christopher De Sa
- 2019 Poster: A Kernel Theory of Modern Data Augmentation
  Tri Dao · Albert Gu · Alexander J Ratner · Virginia Smith · Christopher De Sa · Christopher Re
- 2019 Poster: Improving Neural Network Quantization without Retraining using Outlier Channel Splitting
  Ritchie Zhao · Yuwei Hu · Jordan Dotzel · Christopher De Sa · Zhiru Zhang
- 2019 Oral: SWALP: Stochastic Weight Averaging in Low Precision Training
  Guandao Yang · Tianyi Zhang · Polina Kirichenko · Junwen Bai · Andrew Wilson · Christopher De Sa
- 2019 Oral: Improving Neural Network Quantization without Retraining using Outlier Channel Splitting
  Ritchie Zhao · Yuwei Hu · Jordan Dotzel · Christopher De Sa · Zhiru Zhang
- 2019 Oral: A Kernel Theory of Modern Data Augmentation
  Tri Dao · Albert Gu · Alexander J Ratner · Virginia Smith · Christopher De Sa · Christopher Re
- 2018 Poster: Minibatch Gibbs Sampling on Large Graphical Models
  Christopher De Sa · Vincent Chen · Wong
- 2018 Oral: Minibatch Gibbs Sampling on Large Graphical Models
  Christopher De Sa · Vincent Chen · Wong
- 2018 Poster: Representation Tradeoffs for Hyperbolic Embeddings
  Frederic Sala · Christopher De Sa · Albert Gu · Christopher Re
- 2018 Oral: Representation Tradeoffs for Hyperbolic Embeddings
  Frederic Sala · Christopher De Sa · Albert Gu · Christopher Re