

Poster

DAdaQuant: Doubly-adaptive quantization for communication-efficient Federated Learning

Robert Hönig · Yiren Zhao · Robert Mullins

Hall E #406

Keywords: [ OPT: Large Scale, Parallel and Distributed ] [ Deep Learning ]


Abstract: Federated Learning (FL) is a powerful technique for training a model on a server with data from several clients in a privacy-preserving manner. FL incurs significant communication costs because it repeatedly transmits the model between the server and clients. Recently proposed algorithms quantize the model parameters to efficiently compress FL communication. We find that dynamic adaptations of the quantization level can boost compression without sacrificing model quality. We introduce DAdaQuant, a doubly-adaptive quantization algorithm that dynamically changes the quantization level across time and across clients. Our experiments show that DAdaQuant consistently improves client$\rightarrow$server compression, outperforming the strongest non-adaptive baselines by up to $2.8\times$.
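To make the core idea concrete, here is a minimal, hypothetical sketch of quantization with a time-varying quantization level. The `quantize` function and the schedule below are illustrative assumptions, not the authors' exact method: it uses unbiased stochastic fixed-point rounding to `q` levels, and the schedule mimics the time-adaptive part of the idea by starting with coarse quantization (cheap uploads) and refining it in later rounds.

```python
import numpy as np

def quantize(params, q, rng):
    """Stochastic fixed-point quantization to q levels (illustrative).

    Each value is scaled into [-1, 1], multiplied by q, then rounded up
    or down at random so the result is an unbiased estimate of the input.
    """
    s = np.max(np.abs(params))  # per-tensor scale
    if s == 0:
        return params.copy()
    scaled = params / s * q
    floor = np.floor(scaled)
    # round up with probability equal to the fractional part (unbiased)
    up = rng.random(params.shape) < (scaled - floor)
    return (floor + up) / q * s

rng = np.random.default_rng(0)
w = rng.normal(size=1000)

# Hypothetical time-adaptive schedule: coarse early, finer later.
# Lower q means fewer bits per parameter sent client -> server.
for rnd, q in [(1, 2), (2, 8), (3, 32)]:
    err = np.abs(quantize(w, q, rng) - w).mean()
    print(f"round {rnd}: q={q}, mean abs error={err:.4f}")
```

In DAdaQuant the quantization level additionally varies across clients (the second "adaptive" axis); this sketch only shows the time dimension.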
