
Distributed Deep Learning with MXNet Gluon
Alex Smola · Aran Khanna

Sat Aug 05 03:45 PM -- 06:00 PM (PDT) @ Cockle Bay

We present MXNet Gluon, an easy-to-use tool for designing a wide range of networks, from image processing (LeNet, Inception, etc.) to advanced NLP (TreeLSTM). It combines the convenience of imperative frameworks (PyTorch, Torch, Chainer) with efficient symbolic execution (TensorFlow, CNTK). The tutorial covers the following topics: basic distributed linear algebra with NDArray, automatic differentiation of code, and designing networks from scratch (and with Gluon). We then cover convenience and efficiency features such as automatic shape inference, deferred initialization, lazy evaluation, and hybridization of compute graphs. We also discuss structured architectures such as TreeLSTMs, which are key for natural language processing. We conclude by showing how to perform parallel and distributed training on multiple GPUs and multiple machines. For Jupyter notebooks and details see http://gluon.mxnet.io and https://github.com/zackchase/mxnet-the-straight-dope

Author Information

Alex Smola (Amazon)
Aran Khanna (Amazon)

Aran Khanna is an AI engineer on the deep learning research team at Amazon Web Services. Aran is the technical lead for deep learning services on mobile, IoT, and edge devices, working to enable the deployment and management of efficient deep network models across a broad set of devices outside the data center, from Raspberry Pis to smartphones to NVIDIA Jetsons. Aran graduated from Harvard's computer science department shortly before joining the AWS team.