Poster
Constant Stepsize Local GD for Logistic Regression: Acceleration by Instability
Michael Crawshaw · Blake Woodworth · Mingrui Liu
West Exhibition Hall B2-B3 #W-614
Machine learning is powerful, but it is also expensive in terms of resources such as computing power and data. To reduce these requirements, it is common to train machine learning models in parallel across many devices, such as mobile phones, leveraging the compute and data of many users. In this paper, we study a classic algorithm for training machine learning models in this distributed manner, and we prove that it can train certain models much faster than previously understood. The key is allowing the algorithm to be "unstable": instability is usually considered undesirable, but in our setting it is precisely what produces the acceleration.
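The "classic algorithm" in the title is Local GD: each device runs several gradient descent steps on its own data, the devices then average their models, and the process repeats. Below is a minimal NumPy sketch of constant-stepsize Local GD on a toy logistic regression problem. It is an illustration only, not the authors' code; the stepsize, number of local steps, rounds, and data generation are arbitrary placeholders (the paper concerns large constant stepsizes, which this sketch does not attempt to tune).

```python
import numpy as np

def logistic_loss_grad(w, X, y):
    """Gradient of the average logistic loss log(1 + exp(-y * x^T w)), labels y in {-1, +1}."""
    z = np.clip(-y * (X @ w), -50.0, 50.0)     # clip to avoid overflow in exp
    s = 1.0 / (1.0 + np.exp(-z))               # sigma(-y * x^T w)
    return (X * (-y * s)[:, None]).mean(axis=0)

def local_gd(client_data, w0, stepsize, local_steps, rounds):
    """Local GD sketch: each client takes `local_steps` full-batch GD steps on its own
    data with a constant stepsize, then the server averages the client iterates."""
    w = w0.copy()
    for _ in range(rounds):
        client_iterates = []
        for X, y in client_data:
            w_local = w.copy()
            for _ in range(local_steps):
                w_local = w_local - stepsize * logistic_loss_grad(w_local, X, y)
            client_iterates.append(w_local)
        w = np.mean(client_iterates, axis=0)    # communication / averaging step
    return w

# Toy example (hypothetical setup): linearly separable data split across 4 clients,
# trained with a deliberately large constant stepsize.
rng = np.random.default_rng(0)
w_star = rng.normal(size=5)

def make_client(n=50):
    X = rng.normal(size=(n, 5))
    y = np.sign(X @ w_star)
    return X, y

clients = [make_client() for _ in range(4)]
w = local_gd(clients, np.zeros(5), stepsize=10.0, local_steps=20, rounds=30)
```

With separable data, a large constant stepsize lets the local losses oscillate rather than decrease monotonically during the local steps; this unstable regime is the phenomenon the paper analyzes, though the specific constants above are not taken from it.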