Multi-modal distributions are commonly used to model clustered data in statistical learning tasks. In this paper, we consider the Mixed Linear Regression (MLR) problem. We propose an optimal transport-based framework for MLR problems, Wasserstein Mixed Linear Regression (WMLR), which minimizes the Wasserstein distance between the learned and target mixture regression models. Through a model-based duality analysis, WMLR reduces the underlying MLR task to a nonconvex-concave minimax optimization problem, which can be provably solved to find a minimax stationary point by the Gradient Descent Ascent (GDA) algorithm. In the special case of mixtures of two linear regression models, we show that WMLR enjoys global convergence and generalization guarantees. We prove that WMLR’s sample complexity grows linearly with the dimension of data. Finally, we discuss the application of WMLR to the federated learning task where the training samples are collected by multiple agents in a network. Unlike the Expectation-Maximization algorithm, WMLR directly extends to the distributed, federated learning setting. We support our theoretical results through several numerical experiments, which highlight our framework’s ability to handle the federated learning setting with mixture models.
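To make the abstract's minimax recipe concrete, below is a minimal sketch of simultaneous Gradient Descent Ascent on a Wasserstein-1 dual surrogate for a two-component mixed linear regression model. Everything beyond what the abstract states is an assumption for illustration: the critic is a WGAN-style two-layer network with weight clipping standing in for the paper's duality-derived critic class, the mixture weights are fixed to be uniform, and the data, step sizes, and iteration counts are synthetic and illustrative rather than taken from the paper.

```python
import torch

torch.manual_seed(0)
d, n = 5, 4000

# Synthetic two-component mixed linear regression data: y = x^T beta_z + noise,
# with components +1 and -1 and uniform mixing (all assumed for illustration).
beta_true = torch.stack([torch.ones(d), -torch.ones(d)])   # (2, d)
X = torch.randn(n, d)
z = torch.randint(0, 2, (n,))
y = (X * beta_true[z]).sum(dim=1) + 0.05 * torch.randn(n)

# Min player: the two regression vectors. Max player: a small critic on (x, y);
# a two-layer net with weight clipping is a WGAN-style stand-in, not the
# paper's critic parameterization.
beta = torch.randn(2, d, requires_grad=True)
critic = torch.nn.Sequential(
    torch.nn.Linear(d + 1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)

lr_min, lr_max, clip = 1e-2, 1e-2, 0.1
for t in range(5000):
    # Model samples: pick a component uniformly, predict y_hat = x^T beta_k.
    k = torch.randint(0, 2, (n,))
    y_hat = (X * beta[k]).sum(dim=1)

    # Wasserstein-1 dual surrogate: E[f(x, y)] - E[f(x, y_hat)].
    f_real = critic(torch.cat([X, y[:, None]], dim=1)).mean()
    f_fake = critic(torch.cat([X, y_hat[:, None]], dim=1)).mean()
    gap = f_real - f_fake

    # One simultaneous GDA step: descend in beta, ascend in the critic.
    params = [beta] + list(critic.parameters())
    grads = torch.autograd.grad(gap, params)
    with torch.no_grad():
        beta -= lr_min * grads[0]
        for p, g in zip(critic.parameters(), grads[1:]):
            p += lr_max * g
            p.clamp_(-clip, clip)   # crude Lipschitz control via weight clipping

print(beta.detach())   # rows should approach ±1, up to relabeling of components
```

Since both gradient steps are sample averages, they decompose across agents' local datasets; this is consistent with the abstract's point that, unlike Expectation-Maximization, the GDA updates extend directly to the distributed, federated setting.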
Author Information
Theo Diamandis (MIT)
Yonina Eldar (Weizmann Institute of Science)
Alireza Fallah (MIT)
Farzan Farnia (MIT)
Asuman Ozdaglar (MIT)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Oral: A Wasserstein Minimax Framework for Mixed Linear Regression
  Wed. Jul 21st, 01:00 -- 01:20 AM
More from the Same Authors
- 2021: Decentralized Q-Learning in Zero-sum Markov Games
  Kaiqing Zhang · David Leslie · Tamer Basar · Asuman Ozdaglar
- 2022: What is a Good Metric to Study Generalization of Minimax Learners?
  Asuman Ozdaglar · Sarath Pattathil · Jiawei Zhang · Kaiqing Zhang
- 2021 Poster: Private Adaptive Gradient Methods for Convex Optimization
  Hilal Asi · John Duchi · Alireza Fallah · Omid Javidbakht · Kunal Talwar
- 2021 Spotlight: Private Adaptive Gradient Methods for Convex Optimization
  Hilal Asi · John Duchi · Alireza Fallah · Omid Javidbakht · Kunal Talwar
- 2021 Poster: Train simultaneously, generalize better: Stability of gradient-based minimax learners
  Farzan Farnia · Asuman Ozdaglar
- 2021 Spotlight: Train simultaneously, generalize better: Stability of gradient-based minimax learners
  Farzan Farnia · Asuman Ozdaglar
- 2020 Poster: Do GANs always have Nash equilibria?
  Farzan Farnia · Asuman Ozdaglar