Equivariant Neural Networks for General Linear Symmetries on Lie Algebras
Chankyo Kim ⋅ Sicheng Zhao ⋅ Minghan Zhu ⋅ Tzu-Yuan Lin ⋅ Maani Ghaffari
Abstract
Many scientific and geometric problems exhibit general linear symmetries, yet most equivariant neural networks are built for compact groups or simple vector features, limiting their reuse on matrix-valued data such as covariances, inertias, or shape tensors. We introduce \textbf{Reductive Lie Neurons (ReLNs)}, an exactly $\mathrm{GL}(n)$-equivariant architecture that natively supports matrix-valued and Lie-algebraic features. ReLNs resolve a central stability issue for reductive Lie algebras by introducing a non-degenerate adjoint (conjugation)-invariant bilinear form, enabling principled nonlinear interactions and invariant feature construction in a single architecture that \textit{transfers across subgroups without redesign}. We demonstrate ReLNs on algebraic tasks with $\mathfrak{sl}(3)$ and $\mathfrak{sp}(4)$ symmetries, Lorentz-equivariant particle physics, uncertainty-aware drone state estimation via joint velocity--covariance processing, learning from 3D Gaussian-splat representations, and the EMLP double-pendulum benchmark spanning multiple symmetry groups. ReLNs consistently match or outperform strong equivariant and self-supervised baselines while using substantially fewer parameters and compute, improving the accuracy--efficiency trade-off and providing a practical, reusable backbone for learning with broad linear symmetries.
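To make the abstract's key ingredient concrete, here is a minimal sketch of the kind of adjoint-invariant bilinear form it refers to, assuming the trace form $B(X,Y)=\mathrm{tr}(XY)$ on $\mathfrak{gl}(n)$ as an illustrative choice; the paper's exact construction for general reductive Lie algebras may differ. Since $\mathrm{tr}(gXg^{-1}\,gYg^{-1})=\mathrm{tr}(XY)$, scalars built from this form are invariant under the conjugation action of $\mathrm{GL}(n)$, which is what enables invariant feature construction from matrix-valued inputs.

```python
# Sketch (not the paper's implementation): verify that the trace form on
# gl(n) is invariant under the adjoint (conjugation) action of GL(n).
import numpy as np

rng = np.random.default_rng(0)
n = 3

def trace_form(X, Y):
    """Non-degenerate Ad-invariant bilinear form B(X, Y) = tr(XY) on gl(n)."""
    return np.trace(X @ Y)

X, Y = rng.standard_normal((2, n, n))            # Lie-algebra features in gl(n)
g = rng.standard_normal((n, n)) + n * np.eye(n)  # generic invertible element of GL(n)
g_inv = np.linalg.inv(g)

# Conjugation (adjoint action) of GL(n) on its Lie algebra.
Xg, Yg = g @ X @ g_inv, g @ Y @ g_inv

# B(gXg^-1, gYg^-1) = tr(gXYg^-1) = tr(XY) = B(X, Y), up to floating point.
assert np.isclose(trace_form(X, Y), trace_form(Xg, Yg))
```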