

Poster
in
Workshop: Federated Learning and Analytics in Practice: Algorithms, Systems, Applications, and Opportunities

Neighborhood Gradient Clustering: An Efficient Decentralized Learning Method for Non-IID Data

Sai Aparna Aketi · Sangamesh Kodge · Kaushik Roy


Abstract:

Decentralized learning algorithms enable the training of deep learning models over large distributed datasets without the need for a central server. In practical scenarios, the distributed datasets can have significantly different data distributions across the agents. In this paper, we propose Neighborhood Gradient Clustering (NGC), a novel decentralized learning algorithm that improves decentralized learning over non-IID data. Specifically, the proposed method replaces the local gradients of the model with the weighted mean of self-gradients, model-variant cross-gradients, and data-variant cross-gradients. Model-variant cross-gradients are derivatives of the received neighbors' model parameters with respect to the local dataset, computed locally. Data-variant cross-gradients are derivatives of the local model with respect to its neighbors' datasets, received through communication. We demonstrate the efficiency of NGC on non-IID data sampled from various vision datasets. Our experiments show that the proposed method either remains competitive with or outperforms (by up to 6%) the existing state-of-the-art (SoTA), with significantly lower compute and memory requirements.
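The core update described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: it assumes each agent holds its self-gradient plus lists of model-variant and data-variant cross-gradients from its neighbors, and that the mixing weights sum to one; the function name `ngc_gradient` and the weight tuple are invented for this example.

```python
import numpy as np

def ngc_gradient(self_grad, model_variant_grads, data_variant_grads, weights):
    """Illustrative NGC-style update: replace the local gradient with a
    weighted mean of the self-gradient, the averaged model-variant
    cross-gradients, and the averaged data-variant cross-gradients.

    weights: (w_self, w_model, w_data), assumed to sum to 1.
    """
    w_self, w_model, w_data = weights
    g_model = np.mean(model_variant_grads, axis=0)  # from neighbors' models on local data
    g_data = np.mean(data_variant_grads, axis=0)    # local model on neighbors' data (communicated)
    return w_self * self_grad + w_model * g_model + w_data * g_data

# Example: one neighbor, equal-ish weighting
g = ngc_gradient(
    self_grad=np.array([1.0, 2.0]),
    model_variant_grads=[np.array([3.0, 4.0])],
    data_variant_grads=[np.array([5.0, 6.0])],
    weights=(0.5, 0.25, 0.25),
)
```

With weights (1, 0, 0) this reduces to plain local SGD on the agent's own data; the paper's contribution lies in how the cross-gradient terms counteract drift between agents under non-IID partitions.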
