Workshop
Fri Jun 14 08:30 AM -- 06:00 PM (PDT) @ 203
Joint Workshop on On-Device Machine Learning & Compact Deep Neural Network Representations (ODML-CDNNR)
Sujith Ravi · Zornitsa Kozareva · Lixin Fan · Max Welling · Yurong Chen · Werner Bailer · Brian Kulis · Haoji Hu · Jonathan Dekhtiar · Yingyan Lin · Diana Marculescu
Workshop Home Page

This joint workshop aims to bring together researchers, educators, and practitioners interested in the techniques and applications of on-device machine learning and compact, efficient neural network representations. One aim of the workshop is to establish close connections between researchers in the machine learning community and engineers in industry, benefiting both academic researchers and industrial practitioners. The other aim is the evaluation and comparability of resource-efficient machine learning methods and of compact, efficient network representations, including their relation to particular target platforms (some of which may be highly optimized for neural network inference), since the research community has yet to establish standard evaluation procedures and metrics.

The workshop also aims to foster reproducibility and comparability of methods for compact and efficient neural network representations and for on-device machine learning. Contributors are therefore encouraged to make their code available. The organizers plan to make example tasks and datasets available and invite contributors to use them for testing their work. To provide comparable performance evaluation conditions, the use of a common platform (such as Google Colab) is intended.
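The page prescribes no specific metrics, so the following is only an illustrative sketch (assuming PyTorch/torchvision; the model choices, input shape, and function name are hypothetical) of the kind of directly comparable measurements, such as parameter count, model size, and average inference latency on a fixed platform, that a common evaluation setup could standardize.

```python
import time

import torch
import torchvision.models as models


def report_efficiency_metrics(model: torch.nn.Module,
                              input_shape=(1, 3, 224, 224),
                              warmup: int = 5,
                              runs: int = 20) -> dict:
    """Measure simple resource-efficiency metrics for one model on CPU."""
    model.eval()
    params = sum(p.numel() for p in model.parameters())
    size_mb = sum(p.numel() * p.element_size() for p in model.parameters()) / 1e6
    x = torch.randn(*input_shape)
    with torch.no_grad():
        for _ in range(warmup):  # warm-up runs are excluded from timing
            model(x)
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
        latency_ms = (time.perf_counter() - start) / runs * 1e3
    return {"params": params, "size_mb": size_mb, "latency_ms": latency_ms}


if __name__ == "__main__":
    # Compare a standard and a mobile-oriented architecture under identical conditions.
    for name, net in [("resnet50", models.resnet50()),
                      ("mobilenet_v2", models.mobilenet_v2())]:
        print(name, report_efficiency_metrics(net))
```

Fixing the input shape, warm-up protocol, and number of timed runs is what makes the numbers comparable across submissions; on a shared platform such as Colab, the same script run by different contributors yields results measured under the same conditions.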

Welcome and Introduction (Talk)
Hardware Efficiency Aware Neural Architecture Search and Compression (Invited talk)
Structured matrices for efficient deep learning (Invited talk)
DeepCABAC: Context-adaptive binary arithmetic coding for deep neural network compression (Talk)
Poster spotlight presentations (Talk)
Coffee Break AM (Break)
Understanding the Challenges of Algorithm and Hardware Co-design for Deep Neural Networks (Invited talk)
Dream Distillation: A Data-Independent Model Compression Framework (Talk)
The State of Sparsity in Deep Neural Networks (Talk)
Lunch break (Break)
Poster session (Posters)
DNN Training and Inference with Hyper-Scaled Precision (Invited talk)
Mixed Precision Training & Inference (Invited talk)
Coffee Break PM (Break)
Learning Compact Neural Networks Using Ordinary Differential Equations as Activation Functions (Talk)
Triplet Distillation for Deep Face Recognition (Talk)
Single-Path NAS: Device-Aware Efficient ConvNet Design (Talk)
Panel discussion (Discussion)
Wrap-up and Closing (Talk)