

The Non-IID Data Quagmire of Decentralized Machine Learning

Kevin Hsieh · Amar Phanishayee · Onur Mutlu · Phillip Gibbons


Keywords: [ Large Scale Learning and Big Data ] [ Parallel and Distributed Learning ] [ Systems and Software ] [ Optimization - Large Scale, Parallel and Distributed ]


Many large-scale machine learning (ML) applications need to perform decentralized learning over datasets generated at different devices and locations. Such datasets pose a significant challenge to decentralized learning because their different contexts result in heavy data distribution skew across devices/locations. In this paper, we take a step toward better understanding this challenge by presenting a detailed experimental study of decentralized DNN training on a common type of data skew: skewed distribution of data labels across locations/devices. Our study shows that: (i) skewed data labels are a fundamental and pervasive problem for decentralized learning, causing significant accuracy loss across many ML applications, DNN models, training datasets, and decentralized learning algorithms; (ii) the problem is particularly challenging for DNN models with batch normalization layers; and (iii) the degree of skewness is a key determinant of the difficulty of the problem. Based on these findings, we present SkewScout, a system-level approach that adapts the communication frequency of decentralized learning algorithms to the (skew-induced) accuracy loss between data partitions. We also show that group normalization can recover much of the skew-induced accuracy loss of batch normalization.
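The label-skew setting studied in the abstract can be illustrated with a small partitioning sketch. The function below assigns training examples to devices so that each device sees mostly one subset of labels; the `skew` knob and the label-to-device mapping are illustrative assumptions, not the paper's actual partitioning scheme (`skew=0` gives an IID split, `skew=1` a fully label-partitioned split).

```python
import random

def partition_by_label_skew(labels, num_devices, skew=0.8, seed=0):
    """Assign example indices to devices with a skewed label distribution.

    With probability `skew`, an example is routed to the device that
    "owns" its label (label % num_devices); otherwise it is placed on a
    uniformly random device. `skew` is a hypothetical knob for this
    sketch, not a parameter from the paper.
    """
    rng = random.Random(seed)
    partitions = [[] for _ in range(num_devices)]
    for idx, label in enumerate(labels):
        if rng.random() < skew:
            device = label % num_devices   # skewed assignment
        else:
            device = rng.randrange(num_devices)  # IID assignment
        partitions[device].append(idx)
    return partitions
```

For example, partitioning a 4-class dataset across 4 devices with `skew=1.0` gives each device examples of exactly one label, the extreme non-IID case the study examines.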
