## Exploiting Redundancy: Separable Group Convolutional Networks on Lie Groups

### David Knigge · David Romero · Erik Bekkers

##### Hall E #437

Keywords: [ APP: Computer Vision ] [ MISC: General Machine Learning Techniques ] [ MISC: Representation Learning ] [ MISC: Supervised Learning ] [ T: Deep Learning ] [ DL: Other Representation Learning ]

Poster session: Thu 21 Jul 3 p.m. – 5 p.m. PDT

Spotlight presentation: Deep Learning
Thu 21 Jul 12:30 p.m. – 2 p.m. PDT

Abstract: Group convolutional neural networks (G-CNNs) have been shown to increase parameter efficiency and model accuracy by incorporating geometric inductive biases. In this work, we investigate the properties of representations learned by regular G-CNNs and show considerable parameter redundancy in group convolution kernels. This finding motivates further weight tying by sharing convolution kernels over subgroups. To this end, we introduce convolution kernels that are separable over the subgroup and channel dimensions. To obtain equivariance to arbitrary affine Lie groups, we provide a continuous parameterisation of separable convolution kernels. We evaluate our approach across several vision datasets and show that our weight sharing leads to improved performance and computational efficiency. In many settings, separable G-CNNs outperform their non-separable counterparts while using only a fraction of their training time. In addition, thanks to the increase in computational efficiency, we are able to implement G-CNNs equivariant to the $\mathrm{Sim(2)}$ group: the group of dilations, rotations and translations of the plane. $\mathrm{Sim(2)}$-equivariance further improves performance on all tasks considered, and achieves state-of-the-art accuracy on rotated MNIST.
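To make the factorisation concrete, below is a minimal, self-contained PyTorch sketch of a continuous separable group convolution kernel in the spirit of the abstract: the full kernel over channels, subgroup elements, and spatial offsets is modelled as a rank-1 product of a channel-mixing matrix, a subgroup factor, and a spatial factor, with the two continuous factors parameterised by small MLPs so the kernel can be sampled at arbitrary group elements. This is an illustrative assumption of one possible factorisation, not the authors' implementation; the names `SeparableGroupKernel`, `spatial_net`, `group_net`, and `channel_weight` are hypothetical.

```python
import torch
import torch.nn as nn


class SeparableGroupKernel(nn.Module):
    """Rank-1 sketch of a continuous separable group convolution kernel:
    k(c_out, c_in, h, x) is modelled as
    channel_weight[c_out, c_in] * k_group(h) * k_spatial(x),
    so each factor is parameterised and stored independently."""

    def __init__(self, in_channels: int, out_channels: int, hidden: int = 32):
        super().__init__()
        # Small MLP mapping a 2-D spatial offset to a kernel value; the
        # continuous parameterisation lets us sample on any spatial grid.
        self.spatial_net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )
        # Small MLP mapping a 1-D subgroup coordinate (e.g. a rotation
        # angle, or a log-scale for dilations) to a kernel value.
        self.group_net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )
        # Pointwise channel mixing: an ordinary weight matrix shared
        # across all spatial offsets and subgroup elements.
        self.channel_weight = nn.Parameter(
            torch.randn(out_channels, in_channels) * 0.1
        )

    def forward(self, xy: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        # xy: (S, 2) spatial offsets; h: (H, 1) subgroup coordinates.
        k_spatial = self.spatial_net(xy).squeeze(-1)  # (S,)
        k_group = self.group_net(h).squeeze(-1)       # (H,)
        # Outer product reassembles the full kernel: (C_out, C_in, H, S).
        return torch.einsum(
            "oi,h,s->oihs", self.channel_weight, k_group, k_spatial
        )


# Usage: sample the kernel on a 5x5 spatial grid and 8 rotation angles.
xs = torch.linspace(-1.0, 1.0, 5)
xy = torch.stack(torch.meshgrid(xs, xs, indexing="ij"), dim=-1).reshape(-1, 2)
angles = torch.linspace(0, 2 * torch.pi, 9)[:-1].unsqueeze(-1)  # (8, 1)
kernel = SeparableGroupKernel(in_channels=8, out_channels=16)(xy, angles)
print(kernel.shape)  # torch.Size([16, 8, 8, 25])
```

Under this sketch, the sampled tensor becomes an ordinary convolution weight once the subgroup axis is folded into the channel axis, and the parameter count scales with the MLP width and channel counts rather than with the spatial or subgroup sampling resolution, which illustrates where the computational savings of a separable parameterisation come from.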
