Gradient Descent on Neurons and its Link to Approximate Second-order Optimization

Frederik Benzing

Hall E #639

Keywords: [ DL: Theory ] [ T: Optimization ] [ OPT: Everything Else ] [ OPT: Higher order ] [ MISC: Everything Else ]

Wed 20 Jul 3:30 p.m. PDT — 5:30 p.m. PDT
Spotlight presentation: Miscellaneous Aspects of Machine Learning/Reinforcement Learning
Wed 20 Jul 1:30 p.m. PDT — 3 p.m. PDT


Second-order optimizers are thought to hold the potential to speed up neural network training, but due to the enormous size of the curvature matrix, they typically require approximations to be computationally tractable. The most successful family of approximations is Kronecker-Factored, block-diagonal curvature estimates (KFAC). Here, we combine tools from prior work to evaluate exact second-order updates with careful ablations to establish a surprising result: Due to its approximations, KFAC is not closely related to second-order updates, and in particular, it significantly outperforms true second-order updates. This challenges widely held beliefs and immediately raises the question why KFAC performs so well. Towards answering this question we present evidence strongly suggesting that KFAC approximates a first-order algorithm, which performs gradient descent on neurons rather than weights. Finally, we show that this optimizer often improves over KFAC in terms of computational cost and data-efficiency.
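For context on the method the abstract discusses: KFAC approximates each layer's block of the curvature matrix as a Kronecker product of two small factors, one built from layer inputs and one from backpropagated output gradients, so the preconditioned update can be computed with two small matrix solves instead of inverting the full block. The sketch below illustrates this standard KFAC update for a single fully connected layer; the function name, damping value, and NumPy formulation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def kfac_update(dW, a, g, damping=1e-2):
    """Illustrative KFAC-style preconditioning for one dense layer.

    dW: weight gradient (out x in)
    a:  layer inputs over a batch (batch x in)
    g:  backpropagated output gradients (batch x out)

    KFAC approximates the layer's curvature block as G (x) A (Kronecker
    product), so the preconditioned step is G^{-1} dW A^{-1}, computed
    with two small solves. Damping keeps the factors invertible.
    """
    n = a.shape[0]
    A = a.T @ a / n + damping * np.eye(a.shape[1])  # input second-moment factor
    G = g.T @ g / n + damping * np.eye(g.shape[1])  # output-gradient factor
    return np.linalg.solve(G, dW) @ np.linalg.inv(A)
```

With near-identity factors (e.g. whitened inputs and gradients), the update reduces to the raw gradient, which is one way to see that KFAC interpolates between first- and second-order behaviour.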
