Distributed Stochastic $K$-Level Optimization Over Networks
Xinwen Zhang ⋅ Yihan Zhang ⋅ Hongchang Gao ⋅ Heng Huang
Abstract
In recent years, decentralized optimization has gained significant attention for solving machine learning problems where data are distributed across multiple devices. However, existing decentralized optimization algorithms are primarily designed for single-level and two-level problems, which limits their applicability to more complex tasks such as decentralized stochastic $K$-level optimization with $K>2$. In this work, we propose a novel decentralized stochastic $K$-level variance-reduced gradient descent algorithm that addresses the significant computation and communication overhead induced by the multi-level structure. Moreover, we develop a new theoretical analysis to handle the recursive dependence across levels that arises when establishing the convergence rate of our algorithm. Finally, experimental results confirm the effectiveness of the proposed algorithm.
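For context, the decentralized stochastic $K$-level problem typically takes a nested compositional form; the notation below is illustrative of the standard setup in this literature, not necessarily the paper's own:
$$\min_{x\in\mathbb{R}^d}\ \frac{1}{n}\sum_{i=1}^{n} f_K^{(i)}\Big(f_{K-1}^{(i)}\big(\cdots f_1^{(i)}(x)\cdots\big)\Big), \qquad f_k^{(i)}(\cdot)=\mathbb{E}_{\xi_k^{(i)}}\big[F_k^{(i)}(\cdot\,;\xi_k^{(i)})\big],$$
where $n$ nodes, each holding its own local stochastic functions $f_1^{(i)},\dots,f_K^{(i)}$, communicate only with their neighbors over a network. Setting $K=1$ recovers standard decentralized stochastic optimization, and $K=2$ the two-level (compositional) case.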