Poster

On the Universality of Invariant Networks

Haggai Maron · Ethan Fetaya · Nimrod Segol · Yaron Lipman

Pacific Ballroom #74

Keywords: [ Deep Learning Theory ]


Abstract: Constraining linear layers in neural networks to respect symmetry transformations from a group G is a common design principle for invariant networks that has found many applications in machine learning. In this paper, we consider a fundamental question that has received very little attention to date: Can these networks approximate any (continuous) invariant function? We tackle the rather general case where $G \leq S_n$ is an arbitrary subgroup of the symmetric group acting on $\mathbb{R}^n$ by permuting coordinates. This setting includes several recent popular invariant networks. We present two main results: First, G-invariant networks are universal if high-order tensors are allowed. Second, there are groups G for which higher-order tensors are unavoidable for obtaining universality. G-invariant networks consisting of only first-order tensors are of special interest due to their practical value. We conclude the paper by proving a necessary condition for the universality of G-invariant networks that incorporate only first-order tensors. Lastly, we propose a conjecture stating that this condition is also sufficient.
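
The following is a minimal illustrative sketch, not the paper's construction, of the design principle described in the abstract for the special case G = S_n with only first-order tensors: permutation-equivariant linear layers followed by an invariant pooling and a small MLP. All layer sizes, nonlinearities, and the NumPy implementation are assumptions made for illustration.

```python
# Minimal sketch (assumed, for illustration) of a first-order S_n-invariant network:
# equivariant linear layers, then sum pooling (invariant), then an MLP.
import numpy as np

rng = np.random.default_rng(0)

def equivariant_linear(X, W1, W2, b):
    # X: (n, d_in). Per-element transform plus a pooled (symmetric) term,
    # so permuting the rows of X permutes the rows of the output.
    pooled = X.mean(axis=0, keepdims=True)          # (1, d_in)
    return X @ W1 + pooled @ W2 + b                 # (n, d_out)

def invariant_network(X, params):
    # Stack equivariant layers, sum-pool to get a permutation-invariant vector,
    # then apply a small MLP to that vector.
    for (W1, W2, b) in params["equivariant"]:
        X = np.tanh(equivariant_linear(X, W1, W2, b))
    z = X.sum(axis=0)                               # invariant pooling
    for (W, b) in params["mlp"]:
        z = np.tanh(z @ W + b)
    return z

# Toy parameters: equivariant 3 -> 8 -> 8, then MLP 8 -> 4 -> 1 (sizes are arbitrary).
params = {
    "equivariant": [
        (rng.standard_normal((3, 8)), rng.standard_normal((3, 8)), np.zeros(8)),
        (rng.standard_normal((8, 8)), rng.standard_normal((8, 8)), np.zeros(8)),
    ],
    "mlp": [
        (rng.standard_normal((8, 4)), np.zeros(4)),
        (rng.standard_normal((4, 1)), np.zeros(1)),
    ],
}

X = rng.standard_normal((5, 3))                     # a set of n = 5 elements in R^3
perm = rng.permutation(5)
out1 = invariant_network(X, params)
out2 = invariant_network(X[perm], params)
assert np.allclose(out1, out2)                      # output is invariant to permutations
```

The assertion checks the invariance property directly: permuting the input elements leaves the network output unchanged, which is the constraint the paper's universality question is about.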
