Poster
Sketching for First Order Method: Efficient Algorithm for Low-Bandwidth Channel and Vulnerability
Zhao Song · Yitan Wang · Zheng Yu · Lichen Zhang
Sketching is one of the most fundamental tools in large-scale machine learning: it enables runtime and memory savings by randomly compressing the original large problem into lower dimensions. In this paper, we propose a novel sketching scheme for first-order methods in the large-scale distributed learning setting, such that the communication cost between distributed agents is reduced while convergence of the algorithms is still guaranteed. Given gradient information in a high dimension $d$, the agent passes the compressed information processed by a sketching matrix $R\in \mathbb{R}^{s\times d}$ with $s\ll d$, and the receiver de-compresses it via the de-sketching matrix $R^\top$ to ``recover'' the information in the original dimension. Using such a framework, we develop algorithms for federated learning with lower communication costs. However, such random sketching does not protect the privacy of local data directly. We show that the gradient leakage problem still exists after applying the sketching technique by presenting a specific gradient attack method. As a remedy, we rigorously prove that the algorithm is differentially private once additional random noise is added to the gradient information, which yields a first-order approach for federated learning tasks that is both communication-efficient and differentially private. Our sketching scheme can be further generalized to other learning settings and may be of independent interest.
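To make the sketch/de-sketch communication pattern concrete, the snippet below is a minimal NumPy illustration under stated assumptions: the sketching matrix $R$ is taken to be Gaussian with entries scaled so that $R^\top R$ equals the identity in expectation, and the dimensions `d`, `s` and noise scale `sigma` are illustrative placeholders rather than the paper's prescribed construction or privacy calibration.

```python
import numpy as np

rng = np.random.default_rng(0)

d, s = 10_000, 100            # original and sketched dimensions, s << d
g = rng.standard_normal(d)    # local gradient held by a distributed agent

# Random sketching matrix R in R^{s x d}; entries N(0, 1/s) so that
# E[R^T R] = I_d, hence R^T (R g) recovers g in expectation.
R = rng.standard_normal((s, d)) / np.sqrt(s)

msg = R @ g                   # agent transmits s numbers instead of d
g_hat = R.T @ msg             # receiver de-sketches via R^T

# Differentially private variant sketched in the abstract: add random
# noise to the gradient information before sketching (sigma stands in
# for whatever scale the desired privacy guarantee dictates).
sigma = 0.1
msg_private = R @ (g + sigma * rng.standard_normal(d))
```

In a federated round, each agent would send only the $s$-dimensional message, cutting per-agent communication from $d$ to $s$ numbers; the added noise is what the differential privacy argument accounts for.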
Author Information
Zhao Song (Adobe Research)
Yitan Wang (Yale University)
Zheng Yu (Princeton University)
Lichen Zhang (MIT)
More from the Same Authors
- 2023: H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models
  Zhenyu Zhang · Ying Sheng · Tianyi Zhou · Tianlong Chen · Lianmin Zheng · Ruisi Cai · Zhao Song · Yuandong Tian · Christopher Re · Clark Barrett · Zhangyang “Atlas” Wang · Beidi Chen
- 2023 Oral: Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time
  Zichang Liu · Jue Wang · Tri Dao · Tianyi Zhou · Binhang Yuan · Zhao Song · Anshumali Shrivastava · Ce Zhang · Yuandong Tian · Christopher Re · Beidi Chen
- 2023 Poster: Federated Adversarial Learning: A Framework with Convergence Analysis
  Xiaoxiao Li · Zhao Song · Jiaming Yang
- 2023 Poster: A Nearly-Optimal Bound for Fast Regression with $\ell_\infty$ Guarantee
  Zhao Song · Mingquan Ye · Junze Yin · Lichen Zhang
- 2023 Poster: Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time
  Zichang Liu · Jue Wang · Tri Dao · Tianyi Zhou · Binhang Yuan · Zhao Song · Anshumali Shrivastava · Ce Zhang · Yuandong Tian · Christopher Re · Beidi Chen
- 2023 Poster: Sketching Meets Differential Privacy: Fast Algorithm for Dynamic Kronecker Projection Maintenance
  Zhao Song · Xin Yang · Yuanyuan Yang · Lichen Zhang
- 2021 Poster: Fast Sketching of Polynomial Kernels of Polynomial Degree
  Zhao Song · David Woodruff · Zheng Yu · Lichen Zhang
- 2021 Spotlight: Fast Sketching of Polynomial Kernels of Polynomial Degree
  Zhao Song · David Woodruff · Zheng Yu · Lichen Zhang
- 2021 Poster: FL-NTK: A Neural Tangent Kernel-based Framework for Federated Learning Analysis
  Baihe Huang · Xiaoxiao Li · Zhao Song · Xin Yang
- 2021 Spotlight: FL-NTK: A Neural Tangent Kernel-based Framework for Federated Learning Analysis
  Baihe Huang · Xiaoxiao Li · Zhao Song · Xin Yang
- 2021 Poster: Oblivious Sketching-based Central Path Method for Linear Programming
  Zhao Song · Zheng Yu
- 2021 Spotlight: Oblivious Sketching-based Central Path Method for Linear Programming
  Zhao Song · Zheng Yu
- 2020 Poster: Meta-learning for Mixed Linear Regression
  Weihao Kong · Raghav Somani · Zhao Song · Sham Kakade · Sewoong Oh