Poster
Thu Jul 12 09:15 AM -- 12:00 PM (PDT) @ Hall B #50
Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates
Dong Yin · Yudong Chen · Kannan Ramchandran · Peter Bartlett
In this paper, we develop distributed optimization algorithms that are provably robust against Byzantine failures, that is, arbitrary and potentially adversarial behavior of some machines in a distributed computing system, with a focus on achieving optimal statistical performance. A main result of this work is a sharp analysis of two robust distributed gradient descent algorithms, based on median and trimmed mean operations, respectively. We prove statistical error rates for three kinds of population loss functions: strongly convex, non-strongly convex, and smooth non-convex. In particular, these algorithms are shown to achieve order-optimal statistical error rates for strongly convex losses. To achieve better communication efficiency, we further propose a median-based distributed algorithm that is provably robust and uses only one communication round. For strongly convex quadratic losses, we show that this algorithm achieves the same optimal error rate as the robust distributed gradient descent algorithms.
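To make the two aggregation rules concrete, here is a minimal NumPy sketch of coordinate-wise median and coordinate-wise trimmed-mean aggregation of worker gradients, followed by one gradient descent step on the server. The function names, learning rate, and trimming fraction beta are illustrative assumptions, not the paper's implementation; Byzantine workers may contribute arbitrary vectors, and the robust aggregators limit their influence on each coordinate.

```python
import numpy as np

def coordinate_median(gradients):
    """Median of each coordinate across workers (rows = workers)."""
    return np.median(gradients, axis=0)

def coordinate_trimmed_mean(gradients, beta):
    """For each coordinate, drop the largest and smallest beta-fraction
    of the worker values, then average the remainder."""
    m = gradients.shape[0]
    k = int(beta * m)  # number of values trimmed from each end
    sorted_grads = np.sort(gradients, axis=0)
    return sorted_grads[k:m - k].mean(axis=0)

def robust_gd_step(theta, worker_gradients, lr=0.1, rule="median", beta=0.1):
    """One server update: stack the per-worker gradients (possibly
    including arbitrary Byzantine vectors), aggregate robustly, and
    take a gradient step. Hypothetical interface for illustration."""
    grads = np.stack(worker_gradients)
    if rule == "median":
        agg = coordinate_median(grads)
    else:
        agg = coordinate_trimmed_mean(grads, beta)
    return theta - lr * agg

# Example: 10 honest workers plus 2 Byzantine workers sending large
# adversarial vectors; the median step stays close to the honest update.
rng = np.random.default_rng(0)
honest = [np.ones(5) + 0.1 * rng.standard_normal(5) for _ in range(10)]
byzantine = [100.0 * np.ones(5) for _ in range(2)]
theta = robust_gd_step(np.zeros(5), honest + byzantine, rule="median")
```

Note that the trimmed-mean rule requires the trimming fraction beta to be at least the fraction of Byzantine workers for its guarantees to apply, whereas the median rule needs no such parameter.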