Deep neural networks have shown the ability to extract universal feature representations from data such as images and text that are useful for a variety of learning tasks. However, the fruits of representation learning have yet to be fully realized in federated settings. Although data in federated settings is often non-i.i.d. across clients, the success of centralized deep learning suggests that data often shares a global {\em feature representation}, while the statistical heterogeneity across clients or tasks is concentrated in the {\em labels}. Based on this intuition, we propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client. Our algorithm harnesses the distributed computational power across clients to perform many local updates with respect to the low-dimensional local parameters for every update of the representation. We prove that this method obtains linear convergence to the ground-truth representation with near-optimal sample complexity in a linear setting, demonstrating that it can efficiently reduce the problem dimension for each client. Further, we provide extensive experimental results demonstrating the improvement of our method over alternative personalized federated learning approaches in heterogeneous settings.
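The alternating scheme described above can be sketched in the linear setting the abstract analyzes: clients repeatedly fit their low-dimensional heads with the representation frozen, then contribute a gradient step on the shared representation. The sketch below is illustrative, not the paper's implementation: all names (`B_star`, `avg_loss`), the step size, and the round count are assumptions, an exact least-squares solve stands in for the paper's many local head updates, and server aggregation is simplified to averaging representation gradients followed by re-orthonormalization.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n_clients, m = 10, 3, 5, 50  # ambient dim, representation dim, clients, samples per client

# Synthetic ground truth: one shared low-rank representation, client-specific heads
B_star = np.linalg.qr(rng.normal(size=(d, k)))[0]
data = []
for _ in range(n_clients):
    X = rng.normal(size=(m, d))
    w = rng.normal(size=k)
    data.append((X, X @ B_star @ w))  # labels generated through the shared representation

def avg_loss(B, W):
    return sum(np.mean((X @ B @ w - y) ** 2) for (X, y), w in zip(data, W)) / n_clients

B = np.linalg.qr(rng.normal(size=(d, k)))[0]   # shared representation (server state)
W = [np.zeros(k) for _ in range(n_clients)]    # client-specific heads
loss0 = avg_loss(B, W)

for _ in range(100):
    grads = []
    for i, (X, y) in enumerate(data):
        # Local phase: fit the k-dimensional head exactly with B frozen
        # (stands in for many cheap local gradient steps on the head)
        W[i] = np.linalg.lstsq(X @ B, y, rcond=None)[0]
        # Gradient of this client's loss with respect to the shared representation
        r = X @ B @ W[i] - y
        grads.append(np.outer(X.T @ r / m, W[i]))
    # Server phase: average client gradients, update B, re-orthonormalize
    B = np.linalg.qr(B - 0.1 * sum(grads) / n_clients)[0]
```

Because each client only optimizes a k-dimensional head locally (k much smaller than d), the per-client problem dimension shrinks, which is the dimension-reduction benefit the abstract claims.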
Author Information
Liam Collins (University of Texas at Austin)
Ph.D. student at UT Austin Electrical and Computer Engineering, advised by Aryan Mokhtari and Sanjay Shakkottai. Princeton BSE '19.
Hamed Hassani (University of Pennsylvania)

I am an assistant professor in the Department of Electrical and Systems Engineering (as of July 2017). I hold a secondary appointment in the Department of Computer and Information Science. I am also a faculty affiliate of the Warren Center for Network and Data Sciences. Before joining Penn, I was a research fellow at the Simons Institute, UC Berkeley (program: Foundations of Machine Learning). Prior to that, I was a postdoctoral scholar and lecturer in the Institute for Machine Learning at ETH Zürich. I received my Ph.D. degree in Computer and Communication Sciences from EPFL.
Aryan Mokhtari (UT Austin)
Sanjay Shakkottai (University of Texas at Austin)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: Exploiting Shared Representations for Personalized Federated Learning
  Fri. Jul 23rd, 04:00 -- 06:00 AM
More from the Same Authors
- 2021 : Minimax Optimization: The Case of Convex-Submodular
  Arman Adibi · Aryan Mokhtari · Hamed Hassani
- 2021 : Out-of-Distribution Robustness in Deep Learning Compression
  Eric Lei · Hamed Hassani
- 2021 : Finite-Sample Analysis of Off-Policy TD-Learning via Generalized Bellman Operators
  Zaiwei Chen · Siva Maguluri · Sanjay Shakkottai · Karthikeyan Shanmugam
- 2021 : Under-exploring in Bandits with Confounded Data
  Nihal Sharma · Soumya Basu · Karthikeyan Shanmugam · Sanjay Shakkottai
- 2022 : Toward Certified Robustness Against Real-World Distribution Shifts
  Haoze Wu · Teruhiro Tagomori · Alex Robey · Fengjun Yang · Nikolai Matni · George J. Pappas · Hamed Hassani · Corina Pasareanu · Clark Barrett
- 2023 Poster: Collaborative Multi-Agent Heterogeneous Multi-Armed Bandits
  Ronshee Chawla · Daniel Vial · Sanjay Shakkottai · R Srikant
- 2023 Poster: PAC Generalization via Invariant Representations
  Advait Parulekar · Karthikeyan Shanmugam · Sanjay Shakkottai
- 2023 Poster: Demystifying Disagreement-on-the-Line in High Dimensions
  Donghwan Lee · Behrad Moniri · Xinmeng Huang · Edgar Dobriban · Hamed Hassani
- 2023 Poster: Fundamental Limits of Two-layer Autoencoders, and Achieving Them with Gradient Methods
  Aleksandr Shevchenko · Kevin Kögler · Hamed Hassani · Marco Mondelli
- 2023 Oral: Fundamental Limits of Two-layer Autoencoders, and Achieving Them with Gradient Methods
  Aleksandr Shevchenko · Kevin Kögler · Hamed Hassani · Marco Mondelli
- 2022 Poster: MAML and ANIL Provably Learn Representations
  Liam Collins · Aryan Mokhtari · Sewoong Oh · Sanjay Shakkottai
- 2022 Poster: Asymptotically-Optimal Gaussian Bandits with Side Observations
  Alexia Atsidakou · Orestis Papadigenopoulos · Constantine Caramanis · Sujay Sanghavi · Sanjay Shakkottai
- 2022 Spotlight: Asymptotically-Optimal Gaussian Bandits with Side Observations
  Alexia Atsidakou · Orestis Papadigenopoulos · Constantine Caramanis · Sujay Sanghavi · Sanjay Shakkottai
- 2022 Spotlight: MAML and ANIL Provably Learn Representations
  Liam Collins · Aryan Mokhtari · Sewoong Oh · Sanjay Shakkottai
- 2022 Poster: Regret Bounds for Stochastic Shortest Path Problems with Linear Function Approximation
  Daniel Vial · Advait Parulekar · Sanjay Shakkottai · R Srikant
- 2022 Poster: Probabilistically Robust Learning: Balancing Average- and Worst-case Performance
  Alex Robey · Luiz F. O. Chamon · George J. Pappas · Hamed Hassani
- 2022 Spotlight: Probabilistically Robust Learning: Balancing Average- and Worst-case Performance
  Alex Robey · Luiz F. O. Chamon · George J. Pappas · Hamed Hassani
- 2022 Spotlight: Regret Bounds for Stochastic Shortest Path Problems with Linear Function Approximation
  Daniel Vial · Advait Parulekar · Sanjay Shakkottai · R Srikant
- 2022 Poster: Linear Bandit Algorithms with Sublinear Time Complexity
  Shuo Yang · Tongzheng Ren · Sanjay Shakkottai · Eric Price · Inderjit Dhillon · Sujay Sanghavi
- 2022 Poster: Sharpened Quasi-Newton Methods: Faster Superlinear Rate and Larger Local Convergence Neighborhood
  Qiujiang Jin · Alec Koppel · Ketan Rajawat · Aryan Mokhtari
- 2022 Spotlight: Linear Bandit Algorithms with Sublinear Time Complexity
  Shuo Yang · Tongzheng Ren · Sanjay Shakkottai · Eric Price · Inderjit Dhillon · Sujay Sanghavi
- 2022 Spotlight: Sharpened Quasi-Newton Methods: Faster Superlinear Rate and Larger Local Convergence Neighborhood
  Qiujiang Jin · Alec Koppel · Ketan Rajawat · Aryan Mokhtari
- 2021 : Minimax Optimization: The Case of Convex-Submodular
  Hamed Hassani · Aryan Mokhtari · Arman Adibi
- 2021 : Contributed Talk #1
  Eric Lei · Hamed Hassani · Shirin Bidokhti
- 2021 Poster: Combinatorial Blocking Bandits with Stochastic Delays
  Alexia Atsidakou · Orestis Papadigenopoulos · Soumya Basu · Constantine Caramanis · Sanjay Shakkottai
- 2021 Spotlight: Combinatorial Blocking Bandits with Stochastic Delays
  Alexia Atsidakou · Orestis Papadigenopoulos · Soumya Basu · Constantine Caramanis · Sanjay Shakkottai
- 2020 Poster: Quantized Decentralized Stochastic Learning over Directed Graphs
  Hossein Taheri · Aryan Mokhtari · Hamed Hassani · Ramtin Pedarsani
- 2020 Tutorial: Submodular Optimization: From Discrete to Continuous and Back
  Hamed Hassani · Amin Karbasi
- 2019 Poster: Pareto Optimal Streaming Unsupervised Classification
  Soumya Basu · Steven Gutstein · Brent Lance · Sanjay Shakkottai
- 2019 Oral: Pareto Optimal Streaming Unsupervised Classification
  Soumya Basu · Steven Gutstein · Brent Lance · Sanjay Shakkottai
- 2019 Poster: Hessian Aided Policy Gradient
  Zebang Shen · Alejandro Ribeiro · Hamed Hassani · Hui Qian · Chao Mi
- 2019 Oral: Hessian Aided Policy Gradient
  Zebang Shen · Alejandro Ribeiro · Hamed Hassani · Hui Qian · Chao Mi
- 2019 Poster: Entropic GANs meet VAEs: A Statistical Approach to Compute Sample Likelihoods in GANs
  Yogesh Balaji · Hamed Hassani · Rama Chellappa · Soheil Feizi
- 2019 Oral: Entropic GANs meet VAEs: A Statistical Approach to Compute Sample Likelihoods in GANs
  Yogesh Balaji · Hamed Hassani · Rama Chellappa · Soheil Feizi
- 2018 Poster: Multi-Fidelity Black-Box Optimization with Hierarchical Partitions
  Rajat Sen · Kirthevasan Kandasamy · Sanjay Shakkottai
- 2018 Oral: Multi-Fidelity Black-Box Optimization with Hierarchical Partitions
  Rajat Sen · Kirthevasan Kandasamy · Sanjay Shakkottai
- 2018 Poster: Decentralized Submodular Maximization: Bridging Discrete and Continuous Settings
  Aryan Mokhtari · Hamed Hassani · Amin Karbasi
- 2018 Oral: Decentralized Submodular Maximization: Bridging Discrete and Continuous Settings
  Aryan Mokhtari · Hamed Hassani · Amin Karbasi
- 2018 Poster: Towards More Efficient Stochastic Decentralized Learning: Faster Convergence and Sparse Communication
  Zebang Shen · Aryan Mokhtari · Tengfei Zhou · Peilin Zhao · Hui Qian
- 2018 Oral: Towards More Efficient Stochastic Decentralized Learning: Faster Convergence and Sparse Communication
  Zebang Shen · Aryan Mokhtari · Tengfei Zhou · Peilin Zhao · Hui Qian
- 2017 Poster: Identifying Best Interventions through Online Importance Sampling
  Rajat Sen · Karthikeyan Shanmugam · Alexandros Dimakis · Sanjay Shakkottai
- 2017 Talk: Identifying Best Interventions through Online Importance Sampling
  Rajat Sen · Karthikeyan Shanmugam · Alexandros Dimakis · Sanjay Shakkottai