

Poster

Data-free Neural Representation Compression with Riemannian Neural Dynamics

Zhengqi Pei · Anran Zhang · Shuhui Wang · Xiangyang Ji · Qingming Huang

Hall C 4-9 #1902
[ Paper PDF ] [ Slides ] [ Poster ]
Thu 25 Jul 4:30 a.m. PDT — 6 a.m. PDT
Oral presentation: Oral 6D Representation Learning 2
Thu 25 Jul 7:30 a.m. PDT — 8:30 a.m. PDT

Abstract: From a physics-inspired view, neural models are equivalent to dynamical systems, implying that computation in a neural network can be interpreted as dynamical interactions between neurons. However, existing work models neuronal interaction as a weight-based linear transformation, with nonlinearity arising only from the activation functions; this limits the nonlinearity and data-fitting ability of the whole neural model. Inspired by Riemannian geometry, we interpret neural structures by projecting neurons onto the Riemannian neuronal state space and model neuronal interaction with a Riemannian metric (${\it RieM}$), which yields a neural representation with higher parameter efficiency. With ${\it RieM}$, we further design a novel data-free neural compression mechanism that requires no additional fine-tuning with real data. Using backbones such as ResNet and Vision Transformer, we conduct extensive experiments on MNIST, CIFAR-100, ImageNet-1k, and COCO object detection. Empirical results show that, at equal compression rates and computational complexity, models compressed with ${\it RieM}$ achieve superior inference accuracy compared to existing data-free compression methods.
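To make the contrast with weight-based linear interaction concrete, the following is a minimal PyTorch sketch of metric-based neuronal interaction. It is not the paper's actual RieM formulation: the RiemannianInteraction module, the diagonal metric parameterization, and the distance-decay interaction rule are illustrative assumptions only, meant to show how a learned metric over neuron states can replace a weight matrix as the source of nonlinearity.

import torch
import torch.nn as nn

class RiemannianInteraction(nn.Module):
    """Illustrative sketch (not the paper's RieM): each output neuron keeps a
    learnable state vector, and its response is a nonlinear function of the
    input's distance to that state under a learned diagonal metric."""

    def __init__(self, dim_in: int, dim_out: int):
        super().__init__()
        # Learnable neuron states on the output side (assumed parameterization).
        self.states = nn.Parameter(torch.randn(dim_out, dim_in))
        # Log of a diagonal metric tensor: one positive scale per coordinate.
        self.log_metric = nn.Parameter(torch.zeros(dim_in))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim_in). Squared distance under the diagonal metric g:
        # d(x, s_j)^2 = sum_k g_k * (x_k - s_{jk})^2
        g = self.log_metric.exp()                         # (dim_in,)
        diff = x.unsqueeze(1) - self.states.unsqueeze(0)  # (batch, dim_out, dim_in)
        dist2 = (g * diff.pow(2)).sum(-1)                 # (batch, dim_out)
        # Interaction strength decays with metric distance; the map is
        # nonlinear without any separate activation function.
        return torch.exp(-dist2)

# Usage: drop-in replacement for a linear layer in a toy setting.
layer = RiemannianInteraction(dim_in=16, dim_out=8)
out = layer(torch.randn(4, 16))
print(out.shape)  # torch.Size([4, 8])

Note the design point the sketch tries to capture: because the interaction itself is a nonlinear function of metric distance, expressivity no longer hinges solely on activation functions applied after a linear map, which is the limitation the abstract attributes to weight-based formulations.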
