

Poster
in
Workshop: Geometry-grounded Representation Learning and Generative Modeling

The Price of Freedom: Exploring Tradeoffs between Expressivity and Computational Efficiency in Equivariant Tensor Products

YuQing Xie · Ameya Daigavane · Mit Kotak · Tess Smidt

Keywords: [ Equivariance ] [ asymptotics ] [ Gaunt ] [ tensor product ]


Abstract: $E(3)$-equivariant neural networks have recently demonstrated success across a wide range of 3D modelling tasks. A fundamental operation in these networks is the tensor product, which combines two geometric features in an equivariant manner to create new features. Because the tensor product is computationally expensive, significant effort has been invested in optimizing its runtime. For example, Luo et al. proposed the Gaunt tensor product, promising a significant speedup over the naive implementation of the tensor product. Here, we perform a careful, systematic analysis of the runtimes and expressivity of different tensor product implementations: the Clebsch-Gordan tensor product (CGTP), the Gaunt tensor product (GTP), and the fused tensor product (FTP). We find that the naive implementation of CGTP can be improved by leveraging the sparsity of the Clebsch-Gordan coefficients. Further, we show that the original implementation of Luo et al., which uses a 2D Fourier basis, can be improved by instead projecting onto the sphere $S^2$, a variant we call grid GTP. In addition, we show that the speedups of GTP and FTP come at a cost in expressivity compared to CGTP; in fact, in some settings they are asymptotically slower than the sparse version of CGTP. Finally, we provide experimental runtimes for CGTP and GTP. Our code is available at https://github.com/atomicarchitects/PriceOfFreedom.
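The sparsity of Clebsch-Gordan coefficients mentioned in the abstract can be verified directly. The sketch below (not the authors' code; it uses SymPy's `CG` class, and the choice $l_1 = l_2 = l_3 = 2$ is an illustrative assumption) counts how many entries of the dense coefficient tensor are nonzero. The selection rule $m_1 + m_2 = m_3$ alone forces most entries to zero, which is what a sparse CGTP implementation exploits.

```python
# Minimal sketch: measure the sparsity of Clebsch-Gordan coefficients
# for a single example of irrep degrees (l1 = l2 = l3 = 2, an assumed
# illustrative choice, not taken from the paper).
from sympy import S
from sympy.physics.quantum.cg import CG

l1 = l2 = l3 = 2
total = 0
nonzero = 0
for m1 in range(-l1, l1 + 1):
    for m2 in range(-l2, l2 + 1):
        for m3 in range(-l3, l3 + 1):
            total += 1
            # CG(j1, m1, j2, m2, j3, m3) represents <j1 m1; j2 m2 | j3 m3>
            coeff = CG(S(l1), S(m1), S(l2), S(m2), S(l3), S(m3)).doit()
            if coeff != 0:
                nonzero += 1

print(f"{nonzero}/{total} Clebsch-Gordan coefficients are nonzero")
```

Because a coefficient can be nonzero only when $m_3 = m_1 + m_2$, at most 19 of the $5^3 = 125$ entries survive here, so a dense contraction wastes most of its work.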
