Poster
in
Workshop: 2nd Annual Workshop on Topology, Algebra, and Geometry in Machine Learning (TAG-ML)
Equivariant Representation Learning with Equivariant Convolutional Kernel Networks
Soutrik Roy Chowdhury · Johan Suykens
Convolutional Kernel Networks (CKNs) were proposed as multilayered representation learning models built by stacking multiple Reproducing Kernel Hilbert Spaces (RKHSs) in a hierarchical manner. CKNs have been studied to understand the (near) group invariance and (geometric) deformation stability properties of deep convolutional representations by exploiting the geometry of the corresponding RKHSs. The objective of this paper is two-fold: (1) analyzing the construction of group equivariant Convolutional Kernel Networks (equiv-CKNs) that endow the model with symmetries such as translations, rotations, etc.; (2) understanding the deformation stability of equiv-CKNs, taking into account both the geometry of the inductive biases and that of the RKHSs. A multiple-kernel-based construction of equivariant representations might be helpful in understanding the geometric model complexity of equivariant CNNs, as well as shed light on practical considerations in constructing robust equivariant networks.
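The group equivariance property central to the abstract can be illustrated on the simplest case, translations: a layer Phi is equivariant when Phi(T_s x) = T_s Phi(x) for every shift T_s. The following is a minimal numpy sketch of this check for a 1-D circular convolution; it is only an illustration of the equivariance condition, not the paper's equiv-CKN construction, and the function names (`circ_conv`) are hypothetical.

```python
import numpy as np

def circ_conv(x, w):
    """Circular (periodic) convolution: y[i] = sum_k w[k] * x[(i - k) mod n]."""
    n = len(x)
    return np.array([sum(w[k] * x[(i - k) % n] for k in range(len(w)))
                     for i in range(n)])

rng = np.random.default_rng(0)
x = rng.standard_normal(8)   # toy input signal
w = rng.standard_normal(3)   # toy filter
s = 3                        # shift amount

# Translation equivariance: shifting the input then convolving
# equals convolving then shifting the output by the same amount.
shift_then_conv = circ_conv(np.roll(x, s), w)
conv_then_shift = np.roll(circ_conv(x, w), s)
assert np.allclose(shift_then_conv, conv_then_shift)
```

Rotation equivariance (e.g. for planar images under the group of 90-degree rotations) follows the same template with T_s replaced by the rotation action on the input and output feature maps.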