Poster
Learning To Stop While Learning To Predict
Xinshi Chen · Hanjun Dai · Yu Li · Xin Gao · Le Song

Tue Jul 14 10:00 AM -- 10:45 AM & Tue Jul 14 09:00 PM -- 09:45 PM (PDT) @ Virtual

There is a recent surge of interest in designing deep architectures based on the update steps of traditional algorithms, or in learning neural networks to improve and replace traditional algorithms. While traditional algorithms have stopping criteria for outputting results after different numbers of iterations, many algorithm-inspired deep models are restricted to a "fixed depth" for all inputs. As with algorithms, the optimal depth of a deep architecture may differ across input instances, either to avoid "over-thinking" or to save computation on operations that have already converged. In this paper, we tackle this varying-depth problem with a steerable architecture, where a feed-forward deep model and a variational stopping policy are learned together to sequentially determine the optimal number of layers for each input instance. Training such an architecture is very challenging. We provide a variational Bayes perspective and design a novel and effective training procedure that decomposes the task into an oracle model learning stage and an imitation stage. Experimentally, we show that the learned deep model, together with the stopping policy, improves performance on a diverse set of tasks, including sparse recovery, few-shot meta-learning, and computer vision tasks.
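The paper's variational Bayes training procedure (oracle learning followed by imitation) is beyond the scope of a snippet, but the inference-time idea of a variable-depth network steered by a stopping policy can be sketched. Below is a minimal PyTorch illustration, not the authors' implementation: the class name, the greedy thresholding rule, and parameters such as max_depth and halt_threshold are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class AdaptiveDepthNet(nn.Module):
    """Hypothetical sketch: a feed-forward stack paired with a learned
    stopping policy that picks a per-input depth at inference time."""

    def __init__(self, dim: int, max_depth: int = 10, halt_threshold: float = 0.5):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(max_depth)
        )
        # Stopping policy: maps the current hidden state to a halting probability.
        self.stop_policy = nn.Linear(dim, 1)
        self.halt_threshold = halt_threshold

    def forward(self, x: torch.Tensor):
        h = x
        for t, layer in enumerate(self.layers, start=1):
            h = layer(h)
            # Probability of stopping after layer t. For clarity this sketch
            # halts the whole batch at once; true per-instance halting would
            # mask out finished samples instead.
            p_stop = torch.sigmoid(self.stop_policy(h)).mean()
            if p_stop.item() > self.halt_threshold:
                return h, t
        return h, len(self.layers)

model = AdaptiveDepthNet(dim=16)
out, depth = model(torch.randn(4, 16))
print(f"stopped after {depth} layers")
```

Greedy thresholding is only one way to act on the stopping probabilities; the paper instead learns the policy jointly with the model under a variational objective, so the threshold here stands in for that learned decision rule.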

Author Information

Xinshi Chen (Georgia Institute of Technology)
Hanjun Dai (Google Brain)
Yu Li (King Abdullah University of Science and Technology)
Xin Gao (King Abdullah University of Science and Technology)
Le Song (Georgia Institute of Technology)
