Understanding the unstable convergence of gradient descent
Kwangjun Ahn · Jingzhao Zhang · Suvrit Sra

Tue Jul 19 03:30 PM -- 05:30 PM (PDT) @ Hall E #607
Most existing analyses of (stochastic) gradient descent rely on the condition that, for $L$-smooth costs, the step size is less than $2/L$. However, many works have observed that in machine learning applications, step sizes often do not satisfy this condition, yet (stochastic) gradient descent still converges, albeit in an unstable manner. We investigate this unstable convergence phenomenon from first principles and discuss key causes behind it. We also identify its main characteristics and how they interrelate based on both theory and experiments, offering a principled view toward understanding the phenomenon.
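The classical $2/L$ stability threshold mentioned in the abstract can be illustrated on a toy one-dimensional quadratic; a minimal sketch (the function, smoothness constant, and step sizes below are illustrative choices, not from the paper — and note that on a pure quadratic, step sizes above $2/L$ simply diverge, whereas the unstable convergence the paper studies arises for non-quadratic costs):

```python
# Gradient descent on f(x) = (L/2) x^2, which is L-smooth.
# The update x <- x - eta * f'(x) = (1 - eta * L) * x contracts
# exactly when |1 - eta * L| < 1, i.e. when eta < 2/L.

L = 4.0  # smoothness constant (hypothetical choice)

def run_gd(eta, x0=1.0, steps=100):
    x = x0
    for _ in range(steps):
        x -= eta * L * x  # gradient of (L/2) x^2 is L * x
    return x

stable = run_gd(eta=1.5 / L)    # eta < 2/L: iterates shrink toward 0
unstable = run_gd(eta=2.1 / L)  # eta > 2/L: iterates blow up

print(abs(stable), abs(unstable))
```

With `eta = 1.5 / L` the contraction factor is $|1 - 1.5| = 0.5$ per step, so the iterate decays geometrically; with `eta = 2.1 / L` the factor is $1.1$, so it grows geometrically, matching the classical condition the abstract refers to.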

Author Information

Kwangjun Ahn (MIT EECS)
Jingzhao Zhang (Tsinghua University)
Suvrit Sra (MIT & Macro-Eyes)
