We revisit the classical problem of deriving convergence rates for the maximum likelihood estimator (MLE) in finite mixture models. The Wasserstein distance has become a standard loss function for the analysis of parameter estimation in these models, due in part to its ability to circumvent label switching and to accurately characterize the behaviour of fitted mixture components with vanishing weights. However, the Wasserstein distance captures only the worst-case convergence rate among the remaining fitted mixture components. We demonstrate that when the log-likelihood function is penalized to discourage vanishing mixing weights, stronger loss functions can be derived to resolve this shortcoming of the Wasserstein distance. These new loss functions accurately capture the heterogeneity in convergence rates of fitted mixture components, and we use them to sharpen existing pointwise and uniform convergence rates in various classes of mixture models. In particular, these results imply that a subset of the components of the penalized MLE typically converge significantly faster than could have been anticipated from past work. We further show that some of these conclusions extend to the traditional MLE. Our theoretical findings are supported by a simulation study illustrating these improved convergence rates.
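To make the two ingredients of the abstract concrete, here is a minimal Python sketch of (i) a log-likelihood penalized to discourage vanishing mixing weights and (ii) the Wasserstein loss between mixing measures. It assumes a 1-D Gaussian location mixture and a penalty of the form lam * sum(log weights), one standard way to keep weights away from zero; the paper's actual penalty, estimator, and numbers may differ, and `penalized_loglik` and all values below are purely illustrative.

```python
# Illustrative sketch (not the authors' implementation): penalized
# log-likelihood for a 1-D Gaussian location mixture, and the first-order
# Wasserstein distance between a fitted and a true mixing measure.
import numpy as np
from scipy.stats import norm, wasserstein_distance

def penalized_loglik(x, weights, means, sigma=1.0, lam=1.0):
    """Gaussian location-mixture log-likelihood plus lam * sum(log weights),
    an assumed penalty that discourages vanishing mixing weights."""
    dens = np.stack([w * norm.pdf(x, loc=m, scale=sigma)
                     for w, m in zip(weights, means)])
    return np.log(dens.sum(axis=0)).sum() + lam * np.log(weights).sum()

# True mixing measure G0 and a hypothetical fitted measure G_hat:
# atoms are component means, masses are mixing proportions.
true_means, true_w = np.array([-2.0, 2.0]), np.array([0.5, 0.5])
fit_means, fit_w = np.array([-1.9, 1.8, 2.3]), np.array([0.48, 0.32, 0.20])

# Synthetic data from the true two-component mixture.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 100), rng.normal(2.0, 1.0, 100)])
print(f"penalized log-likelihood: {penalized_loglik(x, fit_w, fit_means):.2f}")

# For atoms on the real line, scipy computes W1 between the two
# discrete mixing measures exactly.
W1 = wasserstein_distance(fit_means, true_means, fit_w, true_w)
print(f"W1(G_hat, G0) = {W1:.3f}")
```

As the abstract notes, W1 reflects only the worst-converging atom of G_hat; the paper's stronger losses are designed to distinguish the fast-converging components from the slow ones.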
Author Information
Tudor Manole (Carnegie Mellon University)
Nhat Ho (University of Texas at Austin)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Refined Convergence Rates for Maximum Likelihood Estimation under Finite Mixture Models
  Tue. Jul 19th through Wed. Jul 20th, Hall E #1218
More from the Same Authors
- 2023: Fast Approximation of the Generalized Sliced-Wasserstein Distance
  Dung Le · Huy Nguyen · Khai Nguyen · Nhat Ho
- 2023 Poster: Revisiting Over-smoothing and Over-squashing Using Ollivier-Ricci Curvature
  Khang Nguyen · Nong Hieu · Vinh Nguyen · Nhat Ho · Stanley Osher · Tan Nguyen
- 2023 Poster: On Excess Mass Behavior in Gaussian Mixture Models with Orlicz-Wasserstein Distances
  Aritra Guha · Nhat Ho · XuanLong Nguyen
- 2023 Poster: Self-Attention Amortized Distributional Projection Optimization for Sliced Wasserstein Point-Cloud Reconstruction
  Khai Nguyen · Dang Nguyen · Nhat Ho
- 2023 Poster: Neural Collapse in Deep Linear Networks: From Balanced to Imbalanced Data
  Hien Dang · Tho Tran Huu · Stanley Osher · Hung Tran-The · Nhat Ho · Tan Nguyen
- 2022 Poster: Entropic Gromov-Wasserstein between Gaussian Distributions
  Khang Le · Dung Le · Huy Nguyen · · Tung Pham · Nhat Ho
- 2022 Poster: Improving Transformers with Probabilistic Attention Keys
  Tam Nguyen · Tan Nguyen · Dung Le · Duy Khuong Nguyen · Viet-Anh Tran · Richard Baraniuk · Nhat Ho · Stanley Osher
- 2022 Spotlight: Improving Transformers with Probabilistic Attention Keys
  Tam Nguyen · Tan Nguyen · Dung Le · Duy Khuong Nguyen · Viet-Anh Tran · Richard Baraniuk · Nhat Ho · Stanley Osher
- 2022 Spotlight: Entropic Gromov-Wasserstein between Gaussian Distributions
  Khang Le · Dung Le · Huy Nguyen · · Tung Pham · Nhat Ho
- 2022 Poster: On Transportation of Mini-batches: A Hierarchical Approach
  Khai Nguyen · Dang Nguyen · Quoc Nguyen · Tung Pham · Hung Bui · Dinh Phung · Trung Le · Nhat Ho
- 2022 Poster: Architecture Agnostic Federated Learning for Neural Networks
  Disha Makhija · Xing Han · Nhat Ho · Joydeep Ghosh
- 2022 Poster: Improving Mini-batch Optimal Transport via Partial Transportation
  Khai Nguyen · Dang Nguyen · The-Anh Vu-Le · Tung Pham · Nhat Ho
- 2022 Spotlight: Architecture Agnostic Federated Learning for Neural Networks
  Disha Makhija · Xing Han · Nhat Ho · Joydeep Ghosh
- 2022 Spotlight: Improving Mini-batch Optimal Transport via Partial Transportation
  Khai Nguyen · Dang Nguyen · The-Anh Vu-Le · Tung Pham · Nhat Ho
- 2022 Spotlight: On Transportation of Mini-batches: A Hierarchical Approach
  Khai Nguyen · Dang Nguyen · Quoc Nguyen · Tung Pham · Hung Bui · Dinh Phung · Trung Le · Nhat Ho
- 2021 Poster: LAMDA: Label Matching Deep Domain Adaptation
  Trung Le · Tuan Nguyen · Nhat Ho · Hung Bui · Dinh Phung
- 2021 Spotlight: LAMDA: Label Matching Deep Domain Adaptation
  Trung Le · Tuan Nguyen · Nhat Ho · Hung Bui · Dinh Phung