

Workshop

Dynamic Neural Networks

Tomasz Trzcinski · Marco Levorato · Simone Scardapane · Bradley McDanel · Andrea Banino · Carlos Riquelme Ruiz

Ballroom 1

Fri 22 Jul, 6 a.m. PDT

Deep networks have shown outstanding scaling properties in terms of both data and model size: larger models trained on more data perform better. Unfortunately, the computational cost of current state-of-the-art methods is prohibitive. A number of techniques have recently emerged to improve this fundamental quality-cost trade-off, including conditional computation, adaptive computation, dynamic model sparsification, and early-exit approaches. This workshop explores these exciting and practically relevant research avenues.

More specifically, as part of the contributed content we invite high-quality papers on the following topics: dynamic routing, mixture-of-experts models, early-exit methods, conditional computation, capsules and object-oriented learning, reusable components, online network growing and pruning, online neural architecture search, and applications of dynamic networks (continual learning, wireless/embedded devices, and similar).

The workshop is planned as a whole-day event and will feature two keynote talks, a mix of panel discussion, contributed and invited talks, and a poster session. The invited speakers cover a diverse range of research fields (machine learning, computer vision, neuroscience, natural language processing) and backgrounds (academic, industry) and include speakers from underrepresented groups. All speakers have confirmed their talks, and the list ranges from senior faculty members (Gao Huang, Tinne Tuytelaars) to applied and theoretical research scientists (Weinan Sun, Francesco Locatello). The workshop builds on a series of previous workshops run at prime venues such as CVPR, NeurIPS, and ICLR.
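
To make the quality-cost trade-off concrete, here is a minimal sketch (not from the workshop materials) of one of the techniques named above, an early-exit network: an auxiliary classifier head after an early stage lets confident inputs skip later computation at inference time. It assumes PyTorch; the class name, layer sizes, and confidence threshold are illustrative choices, and real implementations typically route per example and train with a weighted sum of losses over all exit heads.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EarlyExitMLP(nn.Module):
    """Two-stage MLP with an auxiliary classifier after the first stage.

    At inference time, a batch whose first-stage predictions are all
    confident enough skips the second stage, trading a little accuracy
    for reduced computation.
    """

    def __init__(self, in_dim=784, hidden=256, num_classes=10, threshold=0.9):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.exit1 = nn.Linear(hidden, num_classes)   # early-exit head
        self.stage2 = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
        self.exit2 = nn.Linear(hidden, num_classes)   # final head
        self.threshold = threshold  # softmax confidence needed to exit early

    def forward(self, x):
        h = self.stage1(x)
        logits1 = self.exit1(h)
        if not self.training:
            # Exit early when every example clears the confidence threshold.
            # (Simplification: production systems route each example
            # independently rather than gating the whole batch.)
            conf = F.softmax(logits1, dim=-1).max(dim=-1).values
            if bool((conf >= self.threshold).all()):
                return logits1
        return self.exit2(self.stage2(h))

model = EarlyExitMLP().eval()
with torch.no_grad():
    out = model(torch.randn(4, 784))  # logits of shape (4, 10)
```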


Schedule