Boosting the Throughput and Accelerator Utilization of Specialized CNN Inference Beyond Increasing Batch Size
Jack Kosaian · Amar Phanishayee · Matthai Philipose · Debadeepta Dey · Rashmi Vinayak

Tue Jul 20 07:20 AM -- 07:25 AM (PDT)

Datacenter vision systems widely use small, specialized convolutional neural networks (CNNs) trained on specific tasks for high-throughput inference. These settings employ accelerators with massive computational capacity, but specialized CNNs underutilize this capacity due to their low arithmetic intensity. This results in suboptimal application-level throughput and poor return on accelerator investment. Increasing batch size is the only known way to increase both application-level throughput and accelerator utilization for inference, but it yields diminishing returns; specialized CNNs poorly utilize accelerators even with large batch sizes. We propose FoldedCNNs, a new approach to CNN design that increases inference throughput and utilization beyond what large batch sizes achieve. FoldedCNNs rethink the structure of inputs and layers of specialized CNNs to boost arithmetic intensity: in FoldedCNNs, f images with C channels each are concatenated into a single input with fC channels and jointly classified by a wider CNN. The increased arithmetic intensity of FoldedCNNs improves the throughput and GPU utilization of specialized CNN inference by up to 2.5x and 2.8x, respectively, with accuracy close to that of the original CNN in most cases.
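The input transformation described in the abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical illustration (not the authors' implementation) of folding f images channel-wise, assuming NCHW layout and a batch size divisible by f; `fold_batch` is a name introduced here for illustration.

```python
import numpy as np

def fold_batch(images, f):
    """Fold f consecutive images channel-wise into one multi-image input.

    images: array of shape (N, C, H, W), with N divisible by f.
    Returns an array of shape (N // f, f * C, H, W); each folded input
    stacks the channels of f images, which a wider CNN would then
    jointly classify (the wider CNN itself is not shown here).
    """
    n, c, h, w = images.shape
    assert n % f == 0, "batch size must be divisible by the fold factor f"
    return images.reshape(n // f, f * c, h, w)

# Example: fold a batch of 8 RGB 32x32 images with fold factor f=2.
batch = np.random.rand(8, 3, 32, 32).astype(np.float32)
folded = fold_batch(batch, f=2)
print(folded.shape)  # (4, 6, 32, 32)
```

Folding reduces the effective batch dimension by f while widening each input by f, so the same number of images is processed in fewer, more compute-dense kernel launches, raising arithmetic intensity.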

Author Information

Jack Kosaian (Carnegie Mellon University)
Amar Phanishayee (Microsoft Research)
Matthai Philipose (Microsoft Research)
Debadeepta Dey (Microsoft Research)
Rashmi Vinayak (Carnegie Mellon University)
