

Poster

Orthogonalized SGD and Nested Architectures for Anytime Neural Networks

Chengcheng Wan · Henry (Hank) Hoffmann · Shan Lu · Michael Maire

Keywords: [ Architectures ] [ Optimization ] [ Deep Learning - General ]


Abstract:

We propose a novel variant of SGD customized for training network architectures that support anytime behavior: such networks produce a series of increasingly accurate outputs over time. Efficient architectural designs for these networks focus on re-using internal state; subnetworks must produce representations relevant both for immediate prediction and for refinement by subsequent network stages. We consider traditional branched networks as well as a new class of recursively nested networks. Our new optimizer, Orthogonalized SGD, dynamically re-balances task-specific gradients when training a multitask network. In the context of anytime architectures, this optimizer projects gradients from later outputs onto a parameter subspace that does not interfere with those from earlier outputs. Experiments demonstrate that training with Orthogonalized SGD significantly improves generalization accuracy of anytime networks.
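The abstract describes the projection only at a high level. As a rough illustration (not the authors' implementation, and the paper's exact projection and re-balancing rules may differ), the core idea of making later-output gradients orthogonal to earlier-output gradients can be sketched in a few lines of PyTorch, assuming per-output gradients have already been flattened into 1-D vectors and the hypothetical helper below is called before the optimizer step:

    import torch

    def orthogonalized_combine(per_output_grads):
        # per_output_grads: list of flattened 1-D gradient tensors, ordered
        # from the earliest anytime output to the latest.
        basis = []  # orthonormal directions already claimed by earlier outputs
        combined = torch.zeros_like(per_output_grads[0])
        for g in per_output_grads:
            g_proj = g.clone()
            # Remove components along earlier-output directions (Gram-Schmidt),
            # so this output's update does not interfere with earlier outputs.
            for b in basis:
                g_proj = g_proj - torch.dot(g_proj, b) * b
            combined = combined + g_proj
            norm = g_proj.norm()
            if norm > 1e-12:
                basis.append(g_proj / norm)  # extend the protected subspace
        return combined

In practice the combined vector would be unflattened back into each parameter's .grad field before a standard SGD step; this sketch is meant only to make the "project later gradients onto a non-interfering subspace" statement concrete.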
