The high sensitivity of neural architecture search (NAS) methods to inputs such as the step-size (i.e., learning rate) and the search space prevents practitioners from applying them out-of-the-box to their own problems, even though the purpose of NAS is to automate part of the tuning process. Aiming at a fast, robust, and widely applicable NAS, we develop a generic optimization framework for NAS. We turn the coupled optimization of connection weights and neural architecture into a differentiable optimization by means of stochastic relaxation. The framework accepts an arbitrary search space (widely applicable) and enables gradient-based simultaneous optimization of weights and architecture (fast). We propose a stochastic natural gradient method with an adaptive step-size mechanism built upon our theoretical investigation (robust). Despite its simplicity and the absence of problem-dependent parameter tuning, our method exhibits near state-of-the-art performance with low computational budgets on both image classification and inpainting tasks.
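To illustrate the idea of stochastic relaxation with a natural gradient update, the following is a minimal sketch on a toy problem, not the authors' implementation. It assumes a search space of a few categorical decisions, a made-up loss `toy_loss`, ranking-based utilities, and fixed step-sizes; the adaptive step-size mechanism of the paper is omitted. All names (`toy_loss`, `n_choices`, `target_arch`, etc.) are illustrative assumptions.

```python
import numpy as np

# Sketch: relax discrete architecture choices into a product of categorical
# distributions with parameters theta, then ascend a Monte-Carlo estimate of
# the natural gradient of the expected (utility-transformed) loss, while the
# weights w are updated by plain gradient descent on a sampled architecture.

rng = np.random.default_rng(0)

n_decisions, n_choices = 3, 4                     # toy search space
theta = np.full((n_decisions, n_choices), 1.0 / n_choices)
w = rng.normal(size=5)                            # toy connection weights
target_arch = np.array([1, 3, 0])                 # hidden optimum (toy)

def toy_loss(w, arch):
    """Stand-in for the training loss of weights w under architecture arch."""
    return np.sum(w ** 2) + np.sum(arch != target_arch)

eta_theta, eta_w, lam = 0.1, 0.05, 2              # step-sizes, samples per step

for t in range(200):
    archs = np.array([[rng.choice(n_choices, p=theta[d]) for d in range(n_decisions)]
                      for _ in range(lam)])
    losses = np.array([toy_loss(w, a) for a in archs])
    # Ranking-based utility: samples at or below the median get +1, others -1.
    u = np.where(losses <= np.median(losses), 1.0, -1.0)

    # Natural gradient estimate for categorical distributions in expectation
    # parameters reduces to (one-hot(choice) - theta) per sample.
    grad = np.zeros_like(theta)
    for a, ui in zip(archs, u):
        grad += ui * (np.eye(n_choices)[a] - theta)
    theta += eta_theta * grad / lam
    theta = np.clip(theta, 1e-6, None)
    theta /= theta.sum(axis=1, keepdims=True)

    # Simultaneous weight update; in this toy loss the weight gradient (2w)
    # happens to be independent of the sampled architecture.
    w -= eta_w * 2.0 * w

print("most likely architecture:", theta.argmax(axis=1))
```

In this sketch the distribution parameters concentrate on the toy optimum `[1, 3, 0]` while the weights shrink toward zero; the actual method additionally adapts the step-size `eta_theta` based on the accumulated update signal.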
Author Information
Youhei Akimoto (University of Tsukuba / RIKEN AIP)
Shinichi Shirakawa (Yokohama National University)
Nozomu Yoshinari (Yokohama National University)
Kento Uchida (Yokohama National University)
Shota Saito (Yokohama National University)
Kouhei Nishida (Shinshu University)
Related Events (a corresponding poster, oral, or spotlight)
2019 Oral: Adaptive Stochastic Natural Gradient Method for One-Shot Neural Architecture Search
Tue. Jun 11th 11:25 -- 11:30 PM Room Hall B