Orthogonalized SGD and Nested Architectures for Anytime Neural Networks

Chengcheng Wan · Henry (Hank) Hoffmann · Shan Lu · Michael Maire

Keywords: [ Optimization ] [ Architectures ] [ Deep Learning - General ]

[ Abstract ]
Wed 15 Jul 3 p.m. PDT — 3:45 p.m. PDT
Thu 16 Jul 2 a.m. PDT — 2:45 a.m. PDT


We propose a novel variant of SGD customized for training network architectures that support anytime behavior: such networks produce a series of increasingly accurate outputs over time. Efficient architectural designs for these networks focus on re-using internal state; subnetworks must produce representations relevant both for immediate prediction and for refinement by subsequent network stages. We consider traditional branched networks as well as a new class of recursively nested networks. Our new optimizer, Orthogonalized SGD, dynamically re-balances task-specific gradients when training a multitask network. In the context of anytime architectures, this optimizer projects gradients from later outputs onto a parameter subspace that does not interfere with those from earlier outputs. Experiments demonstrate that training with Orthogonalized SGD significantly improves generalization accuracy of anytime networks.
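The projection step the abstract describes can be illustrated with a minimal sketch: given the gradient from an early output and the gradient from a later output, remove from the later gradient its component along the earlier one, so the later task's update does not interfere with the earlier task. This is a generic Gram-Schmidt-style projection on flattened gradient vectors; the function and variable names are ours, not the authors', and the full method in the paper may differ in detail.

```python
import numpy as np

def orthogonalize(g_early, g_late):
    """Return the component of g_late orthogonal to g_early.

    Hypothetical sketch of the gradient-projection idea from the
    abstract, applied to flattened gradient vectors.
    """
    denom = np.dot(g_early, g_early)
    if denom == 0.0:
        return g_late  # no earlier gradient to project against
    return g_late - (np.dot(g_late, g_early) / denom) * g_early

# Combine per-output gradients for one SGD step: the early output's
# gradient is kept intact; the later output contributes only the
# component that does not interfere with it.
g1 = np.array([1.0, 0.0, 1.0])  # gradient from an early exit (illustrative)
g2 = np.array([1.0, 1.0, 0.0])  # gradient from a later exit (illustrative)
update = g1 + orthogonalize(g1, g2)
```

After the projection, `orthogonalize(g1, g2)` has zero dot product with `g1`, so stepping along `update` moves the early output's loss exactly as a step along `g1` alone would, to first order.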
