
Poster
Topology-aware Generalization of Decentralized SGD
Tongtian Zhu · Fengxiang He · Lan Zhang · Zhengyang Niu · Mingli Song · Dacheng Tao

Tue Jul 19 03:30 PM -- 05:30 PM (PDT) @ Hall E #1221
This paper studies the algorithmic stability and generalizability of decentralized stochastic gradient descent (D-SGD). We prove that the consensus model learned by D-SGD is $\mathcal{O}(m/N + 1/m + \lambda^2)$-stable in expectation in the non-convex non-smooth setting, where $N$ is the total sample size of the whole system, $m$ is the number of workers, and $1-\lambda$ is the spectral gap that measures the connectivity of the communication topology. These results then deliver an $\mathcal{O}(1/N + ((m^{-1}\lambda^2)^{\frac{\alpha}{2}} + m^{-\alpha})/N^{1-\frac{\alpha}{2}})$ in-average generalization bound, which is non-vacuous even when $\lambda$ is close to $1$, in contrast to the vacuous bounds suggested by existing literature on the projected version of D-SGD. Our theory indicates that the generalizability of D-SGD is positively correlated with the spectral gap, and explains why consensus control in the initial training phase can ensure better generalization. Experiments with VGG-11 and ResNet-18 on CIFAR-10, CIFAR-100 and Tiny-ImageNet justify our theory. To the best of our knowledge, this is the first work on the topology-aware generalization of vanilla D-SGD. Code is available at \url{https://github.com/Raiden-Zhu/Generalization-of-DSGD}.
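The spectral gap $1-\lambda$ in the bound above is a standard measure of topology connectivity: $\lambda$ is the second-largest eigenvalue magnitude of the doubly stochastic mixing matrix $W$ used for gossip averaging. As a minimal illustration (not code from the paper's repository; the ring mixing weights below are a common convention, assumed here for concreteness), the following sketch computes the spectral gap for a ring topology and a fully connected one:

```python
import numpy as np

def ring_mixing_matrix(m):
    # Doubly stochastic mixing matrix for a ring of m workers:
    # each worker averages equally with itself and its two neighbors.
    W = np.zeros((m, m))
    for i in range(m):
        W[i, i] = 1.0 / 3.0
        W[i, (i - 1) % m] = 1.0 / 3.0
        W[i, (i + 1) % m] = 1.0 / 3.0
    return W

def spectral_gap(W):
    # lambda = second-largest eigenvalue magnitude of W; gap = 1 - lambda.
    eig_mags = np.sort(np.abs(np.linalg.eigvals(W)))[::-1]
    return 1.0 - eig_mags[1]

# Denser topologies have larger spectral gaps (smaller lambda),
# which the paper links to better generalization:
print(spectral_gap(ring_mixing_matrix(16)))       # sparse ring: small gap
print(spectral_gap(np.full((16, 16), 1.0 / 16)))  # fully connected: gap = 1
```

For the fully connected topology $W = \frac{1}{m}\mathbf{1}\mathbf{1}^\top$ has rank one, so $\lambda = 0$ and the gap is exactly $1$; for the ring, the gap shrinks toward $0$ as $m$ grows, which is the regime ($\lambda$ close to $1$) where the paper's bound remains non-vacuous.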

#### Author Information

##### Fengxiang He (JD.com Inc)

Fengxiang He received his BSc in statistics from University of Science and Technology of China and MPhil and PhD in computer science from the University of Sydney. He is currently an algorithm scientist at JD Explore Academy, JD.com Inc, leading its trustworthy AI team. His research interest is theory and practice of trustworthy AI, including deep learning theory, privacy preservation, and fairness. He has published in top conferences and journals, including ICML, NeurIPS, ICLR, CVPR, ICCV, UAI, AAAI, IJCAI, TNNLS, TCSVT, TMM, and Neural Computation.