

Poster

Learning To Stop While Learning To Predict

Xinshi Chen · Hanjun Dai · Yu Li · Xin Gao · Le Song

Virtual

Keywords: [ Transfer, Multitask and Meta-learning ] [ Meta-learning and Automated ML ] [ Transfer and Multitask Learning ] [ Deep Generative Models ]


Abstract:

There is a recent surge of interest in designing deep architectures based on the update steps of traditional algorithms, or in learning neural networks to improve and replace traditional algorithms. While traditional algorithms have stopping criteria that let them output results after different numbers of iterations, many algorithm-inspired deep models are restricted to a "fixed depth" for all inputs. As with algorithms, the optimal depth of a deep architecture may differ across input instances, either to avoid "over-thinking" or to save computation on operations that have already converged. In this paper, we tackle this varying-depth problem with a steerable architecture, where a feed-forward deep model and a variational stopping policy are learned together to sequentially determine the optimal number of layers for each input instance. Training such an architecture is very challenging. We provide a variational Bayes perspective and design a novel and effective training procedure that decomposes the task into an oracle model learning stage and an imitation stage. Experimentally, we show that the learned deep model, together with the stopping policy, improves performance on a diverse set of tasks, including sparse recovery, few-shot meta learning, and computer vision tasks.
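To make the idea of a depth-steerable model concrete, below is a minimal PyTorch sketch (not the authors' code) of a layer stack paired with a stopping policy that emits a halt probability after every layer. The backbone (a small MLP stack), the layer count, the class and attribute names such as SteerableNet and stop_policy, and the threshold-based inference rule are all illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a depth-steerable network: a shared stack of layers
# plus a stopping policy that scores, after each layer, whether to halt.
# All architectural choices here are illustrative assumptions.
import torch
import torch.nn as nn

class SteerableNet(nn.Module):
    def __init__(self, dim=32, num_layers=6):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(num_layers)]
        )
        self.head = nn.Linear(dim, 1)         # task prediction from any depth
        self.stop_policy = nn.Linear(dim, 1)  # stopping score per layer

    def forward(self, x, threshold=0.5):
        stop_logits, preds = [], []
        h = x
        for layer in self.layers:
            h = layer(h)
            stop_logits.append(self.stop_policy(h))  # halt score at this depth
            preds.append(self.head(h))               # prediction at this depth
        stop_probs = torch.sigmoid(torch.cat(stop_logits, dim=-1))  # (batch, T)

        # Inference rule (an assumption for this sketch): halt at the first
        # layer whose stop probability exceeds the threshold, falling back
        # to the last layer if no layer crosses it.
        halted = stop_probs > threshold
        first_stop = halted.float().argmax(dim=-1)
        first_stop = torch.where(
            halted.any(dim=-1),
            first_stop,
            torch.full_like(first_stop, len(self.layers) - 1),
        )
        all_preds = torch.stack(preds, dim=1)  # (batch, T, 1)
        out = all_preds[torch.arange(x.size(0)), first_stop]
        return out, stop_probs

# Usage example with random data.
net = SteerableNet()
x = torch.randn(8, 32)
y_hat, stop_probs = net(x)
```

In the two-stage procedure described in the abstract, one would first train the predictive heads at all depths (the oracle model learning stage) and then fit the stopping policy to imitate the oracle's preferred stopping depths (the imitation stage); the sketch above only shows the forward pass, not that training loop.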
