Workshop
Joint Workshop on On-Device Machine Learning & Compact Deep Neural Network Representations (ODML-CDNNR)
Sujith Ravi · Zornitsa Kozareva · Lixin Fan · Max Welling · Yurong Chen · Werner Bailer · Brian Kulis · Haoji Hu · Jonathan Dekhtiar · Yingyan Lin · Diana Marculescu
Fri 14 Jun, 8:30 a.m. PDT
This joint workshop aims to bring together researchers, educators, and practitioners interested in techniques and applications of on-device machine learning and compact, efficient neural network representations. One aim of the workshop discussion is to establish a close connection between researchers in the machine learning community and engineers in industry, benefiting both academic researchers and industrial practitioners. The other aim is the evaluation and comparability of resource-efficient machine learning methods and compact, efficient network representations, and their relation to particular target platforms (some of which may be highly optimized for neural network inference), since the research community has yet to establish common evaluation procedures and metrics.
The workshop also aims at reproducibility and comparability of methods for compact and efficient neural network representations and on-device machine learning; contributors are thus encouraged to make their code available. The organizers plan to make example tasks and datasets available and invite contributors to use them for testing their work. To provide comparable performance evaluation conditions, the use of a common platform (such as Google Colab) is intended.
Schedule
Fri 8:30 a.m. - 8:40 a.m. | Welcome and Introduction (Talk)
Fri 8:40 a.m. - 9:10 a.m. | Hardware Efficiency Aware Neural Architecture Search and Compression (Invited talk) | Song Han
Fri 9:10 a.m. - 9:40 a.m. | Structured matrices for efficient deep learning (Invited talk) | Sanjiv Kumar
Fri 9:40 a.m. - 10:00 a.m. | DeepCABAC: Context-adaptive binary arithmetic coding for deep neural network compression (Talk) | Simon Wiedemann
Fri 10:00 a.m. - 10:30 a.m. | Poster spotlight presentations (Talk)
Fri 10:30 a.m. - 11:00 a.m. | Coffee Break AM
Fri 11:00 a.m. - 11:30 a.m. | Understanding the Challenges of Algorithm and Hardware Co-design for Deep Neural Networks (Invited talk) | Vivienne Sze
Fri 11:30 a.m. - 11:50 a.m. | Dream Distillation: A Data-Independent Model Compression Framework (Talk) | Kartikeya Bhardwaj
Fri 11:50 a.m. - 12:10 p.m. | The State of Sparsity in Deep Neural Networks (Talk) | Trevor Gale
Fri 12:10 p.m. - 12:40 p.m. | Lunch break
Fri 12:40 p.m. - 2:00 p.m. | Poster session (Posters) | Cong Hao · Zhongqiu Lin · Chengcheng Li · Lars Ruthotto · Bin Yang · Deepthi Karkada
Fri 2:00 p.m. - 2:30 p.m. | DNN Training and Inference with Hyper-Scaled Precision (Invited talk) | Kailash Gopalakrishnan
Fri 2:30 p.m. - 3:00 p.m. | Mixed Precision Training & Inference (Invited talk) | Jonathan Dekhtiar
Fri 3:00 p.m. - 3:30 p.m. | Coffee Break PM
Fri 3:30 p.m. - 3:50 p.m. | Learning Compact Neural Networks Using Ordinary Differential Equations as Activation Functions (Talk)
Fri 3:50 p.m. - 4:10 p.m. | Triplet Distillation for Deep Face Recognition (Talk)
Fri 4:10 p.m. - 4:30 p.m. | Single-Path NAS: Device-Aware Efficient ConvNet Design (Talk) | Dimitrios Stamoulis
Fri 4:30 p.m. - 5:30 p.m. | Panel discussion (Discussion)
Fri 5:30 p.m. - 5:45 p.m. | Wrap-up and Closing (Talk)