A Tight Theory of Error Feedback Algorithms in Distributed Optimization
Abstract
Communication costs are a major bottleneck in distributed learning and first-order optimization. A common approach to alleviate this issue is to compress the gradient information exchanged between agents. However, such compression typically degrades the convergence guarantees of gradient-based methods. Error feedback mechanisms provide a simple and computationally cheap remedy for this issue, but numerous variants have been proposed, and their relative performance remains poorly understood. In this work, we provide tight convergence analyses for two of the main error feedback algorithms in the literature, namely error feedback (EF) and EF21. Our results hold independently of the number of participating agents and rely on the construction of novel Lyapunov functions that recover the best known guarantees in the single-agent regime.
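For reference, the two mechanisms mentioned above admit compact update rules; the sketch below gives their standard single-agent forms from the literature, with a compressor $\mathcal{C}$, stepsize $\gamma$, accumulated error $e^t$, and gradient estimate $g^t$ (the notation is illustrative and may differ from that used in the body of the paper). Error feedback compresses the error-corrected gradient step and carries the residual forward, while EF21 maintains and incrementally updates a compressed gradient estimate:
\begin{align*}
\text{EF:} \quad
& x^{t+1} = x^t - \mathcal{C}\!\left(\gamma \nabla f(x^t) + e^t\right), \\
& e^{t+1} = \gamma \nabla f(x^t) + e^t - \mathcal{C}\!\left(\gamma \nabla f(x^t) + e^t\right), \\[4pt]
\text{EF21:} \quad
& x^{t+1} = x^t - \gamma\, g^t, \\
& g^{t+1} = g^t + \mathcal{C}\!\left(\nabla f(x^{t+1}) - g^t\right).
\end{align*}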