Generalized sliced-Wasserstein distance is a variant of the sliced-Wasserstein distance that exploits non-linear projections, specified by a defining function, to better capture the complex structures of probability distributions. Like the sliced-Wasserstein distance, the generalized sliced-Wasserstein distance is defined as an expectation over random projections, which can be approximated by the Monte Carlo method. However, this approximation can be expensive in high-dimensional settings. To address this, we propose deterministic and fast approximations of the generalized sliced-Wasserstein distance that exploit the concentration of random projections when the defining function is a polynomial or a neural network. Our approximations hinge on the result that one-dimensional projections of a high-dimensional random vector are approximately Gaussian.
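To make the Monte Carlo approximation described above concrete, here is a minimal Python sketch of the estimator for two equal-size empirical measures. The function names (mc_gsw, monomials) and the homogeneous-polynomial slicer are illustrative assumptions, not the paper's exact construction; degree=1 recovers the ordinary sliced-Wasserstein distance.

```python
# Hypothetical sketch of the Monte Carlo estimator of the (generalized)
# sliced-Wasserstein distance; the polynomial defining function below is
# one assumed instantiation, not necessarily the paper's.
import itertools
import numpy as np

def monomials(x, degree):
    """All monomials of the given total degree for each row of x (n, d)."""
    n, d = x.shape
    idx = list(itertools.combinations_with_replacement(range(d), degree))
    return np.stack([np.prod(x[:, list(a)], axis=1) for a in idx], axis=1)

def one_d_wasserstein_p(u, v, p=2):
    """W_p^p between two equal-size 1-D empirical measures (sort and compare)."""
    return np.mean(np.abs(np.sort(u) - np.sort(v)) ** p)

def mc_gsw(x, y, n_projections=100, p=2, degree=1, seed=None):
    """Monte Carlo estimate of GSW_p with a homogeneous polynomial slicer.

    Assumes x and y hold the same number of samples; degree=1 gives the
    ordinary sliced-Wasserstein distance.
    """
    rng = np.random.default_rng(seed)
    feats_x = monomials(x, degree)   # g_theta(x) = <theta, monomials(x)>
    feats_y = monomials(y, degree)
    k = feats_x.shape[1]
    total = 0.0
    for _ in range(n_projections):
        theta = rng.standard_normal(k)
        theta /= np.linalg.norm(theta)   # uniform direction on the sphere
        total += one_d_wasserstein_p(feats_x @ theta, feats_y @ theta, p)
    return (total / n_projections) ** (1.0 / p)

# Usage: two 500-sample point clouds in R^10, cubic defining function.
x = np.random.default_rng(0).standard_normal((500, 10))
y = np.random.default_rng(1).standard_normal((500, 10)) + 0.5
print(mc_gsw(x, y, n_projections=200, p=2, degree=3))
```

The estimator's cost grows with the number of projections needed in high dimensions, which is the expense the paper targets: its deterministic approximations replace this stochastic average with closed forms derived from the fact that the one-dimensional projections above are approximately Gaussian.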
Author Information
Dung Le (École Polytechnique)
Huy Nguyen (University of Texas at Austin)
Khai Nguyen (University of Texas at Austin)
Nhat Ho (University of Texas at Austin)
More from the Same Authors
- 2023 Poster: Revisiting Over-smoothing and Over-squashing Using Ollivier-Ricci Curvature
  Khang Nguyen · Nong Hieu · Vinh Nguyen · Nhat Ho · Stanley Osher · Tan Nguyen
- 2023 Poster: On Excess Mass Behavior in Gaussian Mixture Models with Orlicz-Wasserstein Distances
  Aritra Guha · Nhat Ho · XuanLong Nguyen
- 2023 Poster: Self-Attention Amortized Distributional Projection Optimization for Sliced Wasserstein Point-Cloud Reconstruction
  Khai Nguyen · Dang Nguyen · Nhat Ho
- 2023 Poster: Neural Collapse in Deep Linear Networks: From Balanced to Imbalanced Data
  Hien Dang · Tho Tran Huu · Stanley Osher · Hung Tran-The · Nhat Ho · Tan Nguyen
- 2022 Poster: Entropic Gromov-Wasserstein between Gaussian Distributions
  Khang Le · Dung Le · Huy Nguyen · Tung Pham · Nhat Ho
- 2022 Poster: Improving Transformers with Probabilistic Attention Keys
  Tam Nguyen · Tan Nguyen · Dung Le · Duy Khuong Nguyen · Viet-Anh Tran · Richard Baraniuk · Nhat Ho · Stanley Osher
- 2022 Spotlight: Improving Transformers with Probabilistic Attention Keys
  Tam Nguyen · Tan Nguyen · Dung Le · Duy Khuong Nguyen · Viet-Anh Tran · Richard Baraniuk · Nhat Ho · Stanley Osher
- 2022 Spotlight: Entropic Gromov-Wasserstein between Gaussian Distributions
  Khang Le · Dung Le · Huy Nguyen · Tung Pham · Nhat Ho
- 2022 Poster: On Transportation of Mini-batches: A Hierarchical Approach
  Khai Nguyen · Dang Nguyen · Quoc Nguyen · Tung Pham · Hung Bui · Dinh Phung · Trung Le · Nhat Ho
- 2022 Poster: Architecture Agnostic Federated Learning for Neural Networks
  Disha Makhija · Xing Han · Nhat Ho · Joydeep Ghosh
- 2022 Poster: Improving Mini-batch Optimal Transport via Partial Transportation
  Khai Nguyen · Dang Nguyen · The-Anh Vu-Le · Tung Pham · Nhat Ho
- 2022 Spotlight: Architecture Agnostic Federated Learning for Neural Networks
  Disha Makhija · Xing Han · Nhat Ho · Joydeep Ghosh
- 2022 Spotlight: Improving Mini-batch Optimal Transport via Partial Transportation
  Khai Nguyen · Dang Nguyen · The-Anh Vu-Le · Tung Pham · Nhat Ho
- 2022 Spotlight: On Transportation of Mini-batches: A Hierarchical Approach
  Khai Nguyen · Dang Nguyen · Quoc Nguyen · Tung Pham · Hung Bui · Dinh Phung · Trung Le · Nhat Ho
- 2022 Poster: Refined Convergence Rates for Maximum Likelihood Estimation under Finite Mixture Models
  Tudor Manole · Nhat Ho
- 2022 Oral: Refined Convergence Rates for Maximum Likelihood Estimation under Finite Mixture Models
  Tudor Manole · Nhat Ho
- 2021 Poster: LAMDA: Label Matching Deep Domain Adaptation
  Trung Le · Tuan Nguyen · Nhat Ho · Hung Bui · Dinh Phung
- 2021 Spotlight: LAMDA: Label Matching Deep Domain Adaptation
  Trung Le · Tuan Nguyen · Nhat Ho · Hung Bui · Dinh Phung