

Poster in Workshop: Geometry-grounded Representation Learning and Generative Modeling

Improving Equivariant Networks with Probabilistic Symmetry Breaking

Hannah Lawrence · Vasco Portilheiro · Yan Zhang · Sékou-Oumar Kaba

Keywords: [ symmetry breaking ] [ equivariance ] [ canonicalization ] [ symmetry ]


Abstract:

Equivariance builds known symmetries into neural networks, often improving generalization. However, equivariant networks cannot break self-symmetries present in any given input. This poses a problem in two settings: (1) prediction tasks on symmetric domains, and (2) generative models, which must break symmetries in order to reconstruct data from highly symmetric latent spaces. Equivariant networks are therefore fundamentally limited in these contexts. To remedy this, we present a comprehensive probabilistic framework for symmetry breaking, based on a novel decomposition of equivariant distributions. Concretely, this decomposition yields a practical method for breaking symmetries in any equivariant network via randomized canonicalization, while retaining the inductive bias of symmetry. We experimentally show that our framework improves the performance of group-equivariant methods in modeling lattice spin systems and autoencoding graphs.
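The following is a minimal, hypothetical sketch (not the authors' code) of the randomized-canonicalization idea for the cyclic rotation group C4 acting on square images. The group element is sampled rather than chosen deterministically, so the wrapped model is equivariant in distribution while its individual samples can break an input's self-symmetries; the backbone name and group choice are illustrative assumptions.

```python
# Hypothetical sketch of randomized canonicalization for the C4 rotation
# group on images. Sampling the canonicalizing rotation makes the overall
# map equivariant in distribution, while individual samples may break the
# self-symmetries of a symmetric input.
import torch
import torch.nn as nn

class RandomizedCanonicalization(nn.Module):
    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone  # any (possibly non-equivariant) image-to-image network

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, H, W) with H == W
        # Sample one C4 rotation per example.
        k = torch.randint(0, 4, (x.shape[0],), device=x.device)
        outs = []
        for xi, ki in zip(x, k):
            # Canonicalize with the sampled rotation ...
            xi_rot = torch.rot90(xi, int(ki), dims=(-2, -1))
            yi = self.backbone(xi_rot.unsqueeze(0)).squeeze(0)
            # ... then undo it on the output (assumes the output is also an image).
            outs.append(torch.rot90(yi, -int(ki), dims=(-2, -1)))
        return torch.stack(outs)

# Usage: wrap any image-to-image backbone, e.g. a small CNN.
backbone = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 1, 3, padding=1))
model = RandomizedCanonicalization(backbone)
y = model(torch.randn(8, 1, 32, 32))  # stochastic output, C4-equivariant in distribution
```

The paper's method additionally prescribes how the canonicalizing distribution should be chosen (via a decomposition of equivariant distributions); the uniform sampling above is only the simplest instance of a randomized canonicalizer.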
