Federated learning (FL) is a common and practical framework for learning a machine learning model in a decentralized fashion. A primary motivation behind this decentralized approach is data privacy: the central learner never directly observes the data held by each local source. Federated learning comes with two major challenges: one is handling potentially complex model updates between a server and a large number of data sources; the other is that decentralization alone may be insufficient for privacy, as the local updates themselves can reveal information about the sources' data. To address these issues, we consider an approach to federated learning that combines quantization and differential privacy. Absent privacy concerns, federated learning often relies on quantization to reduce communication complexity. We build upon this approach and develop a new algorithm, the Randomized Quantization Mechanism (RQM), which achieves privacy through two levels of randomization: we first randomly subsample feasible quantization levels, then employ a randomized rounding procedure over these subsampled discrete levels. We establish that our mechanism satisfies Rényi differential privacy (Rényi DP). We empirically study the performance of our algorithm and demonstrate that, compared to previous work, it yields improved privacy-accuracy trade-offs for DP federated learning. To the best of our knowledge, this is the first study that relies solely on randomized quantization, without explicit discrete noise, to achieve Rényi DP guarantees in federated learning systems.
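For reference, the standard definition of Rényi DP (Mironov, 2017) requires that for every pair of adjacent datasets $D, D'$ and a fixed order $\alpha > 1$, a mechanism $M$ satisfies $D_\alpha\big(M(D)\,\|\,M(D')\big) \le \varepsilon$, where $D_\alpha(P\,\|\,Q) = \frac{1}{\alpha-1}\log \mathbb{E}_{x\sim Q}\big[(P(x)/Q(x))^{\alpha}\big]$ is the Rényi divergence.

The two-level randomization described in the abstract can be illustrated with a minimal sketch. This is a toy version under stated assumptions, not the paper's exact construction: it quantizes a scalar in $[-1, 1]$ over a uniform grid, keeps each interior level independently with probability p, and rounds in an unbiased way between the surviving neighbors. The names rqm, num_levels, and p are hypothetical.

```python
import numpy as np

def rqm(x, num_levels=17, p=0.5, rng=None):
    """Toy sketch of a randomized quantization mechanism (hypothetical API).

    Two levels of randomization:
      1. Randomly subsample the candidate quantization levels.
      2. Randomized rounding between the two surviving levels that bracket x,
         with probabilities chosen so the output is unbiased.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Candidate levels: a uniform grid on [-1, 1] (an assumption; the paper's
    # construction specifies its own grid and subsampling distribution).
    levels = np.linspace(-1.0, 1.0, num_levels)
    # Randomization level 1: keep each level with probability p, but always
    # keep the endpoints so that x in [-1, 1] stays bracketed.
    keep = rng.random(num_levels) < p
    keep[0] = keep[-1] = True
    active = levels[keep]
    # Randomization level 2: unbiased randomized rounding to a surviving
    # neighbor below or above x.
    idx = np.searchsorted(active, x, side="left")
    lo = active[max(idx - 1, 0)]
    hi = active[min(idx, len(active) - 1)]
    if hi == lo:
        return float(lo)
    prob_hi = (x - lo) / (hi - lo)  # so that E[output] = x
    return float(hi if rng.random() < prob_hi else lo)

# Example: quantize one clipped gradient coordinate.
print(rqm(0.37, rng=np.random.default_rng(0)))
```

The privacy guarantee in the paper comes from how these two random choices are calibrated in its analysis; the sketch above only conveys the mechanics, not the calibrated parameters.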
Author Information
Yeojoon Youn (Georgia Institute of Technology)
Zihao Hu (Georgia Institute of Technology)
Juba Ziani (Georgia Institute of Technology)
Jacob Abernethy (Georgia Institute of Technology)