Poster

On Convergence of Incremental Gradient for Non-convex Smooth Functions

Anastasiia Koloskova · Nikita Doikov · Sebastian Stich · Martin Jaggi

Hall C 4-9 #1008

Abstract: In machine learning and neural network optimization, algorithms like incremental gradient, single shuffle SGD, and random reshuffle SGD are popular due to their cache efficiency (they minimize cache misses) and good practical convergence behavior. However, their theoretical optimization properties, especially for non-convex smooth functions, remain incompletely explored. This paper delves into the convergence properties of SGD algorithms with arbitrary data ordering, within a broad framework for non-convex smooth functions. Our findings show enhanced convergence guarantees for incremental gradient and single shuffle SGD. In particular, if n is the training set size, we improve the optimization term of the convergence guarantee to reach accuracy ϵ by a factor of n, from O(n/ϵ) to O(1/ϵ).
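To make the three data orderings concrete, below is a minimal sketch of epoch-based SGD under each ordering named in the abstract. The function name sgd_with_ordering and the grad_fn interface are hypothetical illustrations, not the paper's implementation; only the ordering rules (fixed order, one reused permutation, fresh permutation per epoch) follow the standard definitions of these methods.

    import numpy as np

    def sgd_with_ordering(grad_fn, x0, data, epochs, lr, ordering="incremental", seed=0):
        # Hypothetical sketch, not the authors' code.
        # grad_fn(x, sample): gradient of one component function f_i at x.
        # ordering selects how the n samples are visited each epoch:
        #   "incremental"      - fixed order 0..n-1 every epoch
        #   "single_shuffle"   - one random permutation, reused every epoch
        #   "random_reshuffle" - a fresh random permutation each epoch
        rng = np.random.default_rng(seed)
        n = len(data)
        x = np.asarray(x0, dtype=float)
        perm = rng.permutation(n) if ordering == "single_shuffle" else np.arange(n)
        for _ in range(epochs):
            if ordering == "random_reshuffle":
                perm = rng.permutation(n)
            for i in perm:
                x = x - lr * grad_fn(x, data[i])  # one component-gradient step
        return x

    # Usage on a toy least-squares problem, f_i(x) = 0.5 * (a_i @ x - b_i)**2:
    rng = np.random.default_rng(1)
    A, b = rng.normal(size=(100, 5)), rng.normal(size=100)
    grad = lambda x, s: s[0] * (s[0] @ x - s[1])
    x_hat = sgd_with_ordering(grad, np.zeros(5), list(zip(A, b)), epochs=50, lr=0.01)

All three variants share the same per-epoch cost; they differ only in how the permutation is drawn, which is exactly the distinction the paper's convergence guarantees address.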
