On the Convergence of a Decentralized Stochastic Minimax Optimization Algorithm with Compressed Communication
Abstract
The stochastic minimax optimization problem has widespread applications in machine learning. Recently, numerous distributed minimax optimization algorithms have been developed to handle distributed training data, but most of them suffer from high communication costs. To address this issue, we develop a novel communication-efficient decentralized stochastic gradient descent ascent with momentum algorithm based on the error feedback mechanism. Importantly, our algorithm shows how to balance the full-precision update against the compression residual through a novel design of the coefficients applied to the variables and gradients, which is essential to guarantee convergence. Compressing the primal and dual variables (and their gradients) of a stochastic minimax optimization problem with the error feedback mechanism, however, poses significant challenges for the convergence analysis: in particular, it introduces a circular dependence between the consensus errors and the compression errors. To overcome this challenge, we propose novel strategies that enable us to establish the convergence rate of our algorithm. Our theoretical results further show how the compression operator affects the convergence rate. Finally, extensive experimental results confirm the efficacy of the proposed algorithm.
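To make the error feedback mechanism concrete, the following is a minimal sketch, not the paper's algorithm, of compressed communication with error feedback in a decentralized stochastic gradient descent ascent step with momentum. It assumes a top-k compressor, a ring mixing matrix, a CHOCO-style gossip step on compressed copies, and a simple strongly-convex-strongly-concave toy objective; all names and hyperparameters are illustrative assumptions.

```python
# Illustrative sketch only: error-feedback compressed communication for
# decentralized SGDA with momentum on f_i(x, y) = 0.5||x||^2 + x^T A_i y - 0.5||y||^2.
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 10                                   # number of nodes, problem dimension
A = [rng.normal(size=(d, d)) for _ in range(n)]

def top_k(v, k=2):
    """Keep the k largest-magnitude entries; a simple contractive compressor."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

# Ring mixing matrix (doubly stochastic).
W = np.zeros((n, n))
for i in range(n):
    W[i, i], W[i, (i - 1) % n], W[i, (i + 1) % n] = 0.5, 0.25, 0.25

x = rng.normal(size=(n, d)); y = rng.normal(size=(n, d))
mx = np.zeros((n, d)); my = np.zeros((n, d))   # momentum buffers
ex = np.zeros((n, d)); ey = np.zeros((n, d))   # error-feedback residuals
xhat = x.copy(); yhat = y.copy()               # compressed copies shared with neighbors
eta, beta, gamma = 0.05, 0.9, 0.5              # step size, momentum, gossip weight

for t in range(300):
    gx = np.stack([x[i] + A[i] @ y[i] for i in range(n)])       # grad_x
    gy = np.stack([A[i].T @ x[i] - y[i] for i in range(n)])     # grad_y
    mx = beta * mx + (1 - beta) * gx                             # momentum on gradients
    my = beta * my + (1 - beta) * gy
    x = x - eta * mx                                             # descent in the primal
    y = y + eta * my                                             # ascent in the dual
    # Error feedback: compress (local variable - shared copy) plus the residual,
    # update the shared copies, and carry the lost part to the next round.
    qx = np.stack([top_k(x[i] - xhat[i] + ex[i]) for i in range(n)])
    qy = np.stack([top_k(y[i] - yhat[i] + ey[i]) for i in range(n)])
    ex = ex + x - xhat - qx
    ey = ey + y - yhat - qy
    xhat = xhat + qx
    yhat = yhat + qy
    # Gossip on the compressed copies drives the nodes toward consensus.
    x = x + gamma * (W @ xhat - xhat)
    y = y + gamma * (W @ yhat - yhat)

print("primal consensus error:", np.linalg.norm(x - x.mean(axis=0)))
```

Only the compressed differences qx and qy are exchanged, so the per-round communication is controlled by the compressor, while the residuals ex and ey retain the information discarded by compression; how such residuals and the resulting consensus errors interact is exactly the coupling the abstract refers to.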