Workshop
Fri Jul 22 06:00 AM -- 05:00 PM (PDT) @ Ballroom 1
Dynamic Neural Networks
Tomasz Trzcinski · Marco Levorato · Simone Scardapane · Bradley McDanel · Andrea Banino · Carlos Riquelme Ruiz

Deep networks have shown outstanding scaling properties in terms of both data and model size: larger does better. Unfortunately, the computational cost of current state-of-the-art methods is prohibitive. A number of techniques have recently emerged to improve this fundamental quality-cost trade-off, among them conditional computation, adaptive computation, dynamic model sparsification, and early-exit approaches. This workshop explores these exciting and practically relevant research avenues.

More specifically, as part of the contributed content we invite high-quality papers on the following topics: dynamic routing, mixture-of-experts models, early-exit methods, conditional computation, capsules and object-oriented learning, reusable components, online network growing and pruning, online neural architecture search, and applications of dynamic networks (continual learning, wireless/embedded devices, and similar).

The workshop is planned as a whole-day event and will feature two keynote talks, a mix of panel discussions, contributed and invited talks, and a poster session. The invited speakers cover a diverse range of research fields (machine learning, computer vision, neuroscience, natural language processing) and backgrounds (academia, industry), and include speakers from underrepresented groups. All speakers have confirmed their talks; the list ranges from senior faculty members (Gao Huang, Tinne Tuytelaars) to applied and theoretical research scientists (Weinan Sun, Francesco Locatello). The workshop builds on previous editions run at premier venues such as CVPR, NeurIPS, and ICLR.
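To make the early-exit idea above concrete, here is a minimal sketch assuming PyTorch; it is illustrative only and not taken from any workshop paper. The class name EarlyExitNet, the layer sizes, and the 0.9 confidence threshold are all assumptions. An auxiliary classifier head after each block lets confident "easy" inputs leave the network early, skipping the remaining compute.

    # Illustrative early-exit sketch (assumes PyTorch; not from any workshop paper).
    import torch
    import torch.nn as nn

    class EarlyExitNet(nn.Module):
        """A stack of blocks, each followed by an auxiliary classifier head.
        At inference, the first head whose softmax confidence clears a
        threshold produces the prediction, so easy inputs skip the rest
        of the network and save compute."""

        def __init__(self, dim=64, num_classes=10, num_blocks=3, threshold=0.9):
            super().__init__()
            self.blocks = nn.ModuleList(
                nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
                for _ in range(num_blocks)
            )
            self.heads = nn.ModuleList(
                nn.Linear(dim, num_classes) for _ in range(num_blocks)
            )
            self.threshold = threshold

        def forward(self, x):
            # Training: return logits from every exit so all heads can be
            # supervised jointly (e.g., by summing per-head cross-entropy losses).
            logits = []
            for block, head in zip(self.blocks, self.heads):
                x = block(x)
                logits.append(head(x))
            return logits

        @torch.no_grad()
        def predict(self, x):
            # Inference on a single example (batch size 1 assumed for .item()).
            for i, (block, head) in enumerate(zip(self.blocks, self.heads)):
                x = block(x)
                probs = torch.softmax(head(x), dim=-1)
                conf, pred = probs.max(dim=-1)
                if conf.item() >= self.threshold or i == len(self.blocks) - 1:
                    return pred, i  # the exit index shows how much compute was spent

    net = EarlyExitNet().eval()
    pred, exit_idx = net.predict(torch.randn(1, 64))

The threshold controls the quality-cost trade-off directly: raising it routes more inputs through the full network, trading compute for accuracy.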

Welcome note (Welcome note by workshop organizers)
Spatially and Temporally Adaptive Neural Networks (Virtual Keynote)
Where to look next? Different strategies for image exploration under partial observability (Virtual invited talk)
Incorporating Dynamic Structures into Pre-trained Language Models (Virtual invited talk)
Does Continual Learning Equally Forget All Parameters? (Spotlight)
PA-GNN: Parameter-Adaptive Graph Neural Networks (Spotlight)
Triangular Dropout: Variable Network Width without Retraining (Spotlight)
A Theoretical View on Sparsely Activated Networks (Spotlight)
Lunch break (Break)
Dynamic neural networks: Present and Future (Discussion Panel)
Organizing memories for generalization in complementary learning systems (Keynote)
Inductive Biases for Object-Centric Representations in the Presence of Complex Textures (Poster)
Neural Architecture Search with Loss Flatness-aware Measure (Poster)
Dynamic Split Computing for Efficient Deep Edge Intelligence (Poster)
Back to the Source: Test-Time Diffusion-Driven Adaptation (Poster)
Vote for Nearest Neighbors Meta-Pruning of Self-Supervised Networks (Poster)
FLOWGEN: Fast and slow graph generation (Poster)
Noisy Heuristics NAS: A Network Morphism based Neural Architecture Search using Heuristics (Poster)
Learning Modularity for Generalizable Robotic Behaviors (Poster)
The Spike Gating Flow: A Hierarchical Structure Based Spiking Neural Network for Spatiotemporal Computing (Poster)
APP: Anytime Progressive Pruning (Poster)
Dynamic Transformer Networks (Poster)
Fault-Tolerant Collaborative Inference through the Edge-PRUNE Framework (Poster)
Connectivity Properties of Neural Networks Under Performance-Resources Trade-off (Poster)
Deep Policy Generators (Poster)
FedHeN: Federated Learning in Heterogeneous Networks (Poster)
Just-in-Time Sparsity: Learning Dynamic Sparsity Schedules (Poster)
HARNAS: Neural Architecture Search Jointly Optimizing for Hardware Efficiency and Adversarial Robustness of Convolutional and Capsule Networks (Poster)
SnapStar Algorithm: a new way to ensemble Neural Networks (Poster)
Provable Hierarchical Lifelong Learning with a Sketch-based Modular Architecture (Poster)
Confident Adaptive Language Modeling (Poster)
Single, Practical and Fast Dynamic Truncation Kernel Multiplication (Poster)
Parameter efficient dendritic-tree neurons outperform perceptrons (Poster)
Is a Modular Architecture Enough? (Poster)
A Product of Experts Approach to Early-Exit Ensembles (Poster)
Deriving modular inductive biases from the principle of independent mechanisms (Invited talk)
Supernet Training for Federated Image Classification (Spotlight)
Achieving High TinyML Accuracy through Selective Cloud Interactions (Spotlight)
Slimmable Quantum Federated Learning (Spotlight)
Sparse Relational Reasoning with Object-centric Representations (Spotlight)
Play It Cool: Dynamic Shifting Prevents Thermal Throttling (Spotlight)
Efficient Sparsely Activated Transformers (Spotlight)
Networking & happy hour (Break)