Oral
EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
Mingxing Tan · Quoc Le

Tue Jun 11 05:00 PM -- 05:05 PM (PDT) @ Seaside Ballroom

Convolutional Neural Networks (ConvNets) are commonly developed at a fixed computational cost, and then scaled up for better accuracy when more resources become available. Conventional practice is to arbitrarily make ConvNets deeper or wider, or to use larger input image resolution, but is there a more principled method to scale up a ConvNet? In this paper, we systematically study this problem and identify that carefully balancing network depth, width, and resolution can lead to better accuracy and efficiency. Based on this observation, we propose a new scaling method that uniformly scales all dimensions of network depth/width/resolution using a simple yet highly effective compound coefficient. Results show that our method improves performance when scaling up prior MobileNets. To further demonstrate the effectiveness of our scaling method, we also develop a new mobile-size EfficientNet-B0 baseline and scale it up to achieve state-of-the-art 84.4% top-1 / 97.1% top-5 accuracy on ImageNet, while being 8.4x smaller and 6x faster on inference than the best existing ConvNet (Huang et al., 2018). Our scaled EfficientNet models also achieve new state-of-the-art accuracy on five commonly used transfer learning datasets, including CIFAR-100 (91.7%) and Flowers (98.8%), with an order of magnitude fewer parameters.
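For illustration, the following is a minimal Python sketch of the compound scaling rule described in the abstract, assuming the constants alpha=1.2, beta=1.1, gamma=1.15 reported in the paper (chosen so that alpha * beta^2 * gamma^2 is approximately 2, i.e. FLOPs roughly double per unit of the compound coefficient phi). The baseline dimensions below are hypothetical placeholders, not the actual EfficientNet-B0 configuration, and this is not the authors' code.

    # A minimal sketch of compound scaling (not the authors' implementation).
    # Depth, width, and resolution are scaled jointly by one coefficient phi:
    #   depth ~ alpha^phi, width ~ beta^phi, resolution ~ gamma^phi,
    # with alpha * beta^2 * gamma^2 ~= 2 so FLOPs grow by roughly 2^phi.

    ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # constants reported in the paper

    def compound_scale(phi, base_depth, base_width, base_resolution):
        """Return (depth, width, resolution) for a baseline scaled by phi."""
        depth = round(base_depth * ALPHA ** phi)
        width = round(base_width * BETA ** phi)
        resolution = round(base_resolution * GAMMA ** phi)
        return depth, width, resolution

    if __name__ == "__main__":
        # Hypothetical baseline: 16 layers, 32 channels, 224x224 input.
        for phi in range(4):
            print(phi, compound_scale(phi, 16, 32, 224))

Increasing phi by one roughly doubles the computational cost while keeping depth, width, and resolution in balance, which is the key difference from scaling any single dimension alone.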

Author Information

Mingxing Tan (Google Brain)
Quoc Le (Google Brain)
