

Poster in Workshop: Geometry-grounded Representation Learning and Generative Modeling

SINR: Equivariant Neural Vector Fields

David Ruhe · Patrick Forré

Keywords: [ Equivariance ] [ neural field ] [ rotation ] [ nerf ] [ reflection ] [ translation ] [ scaling ] [ implicit neural representation ]


Abstract:

In recent years, several works have applied equivariant neural fields in, e.g., computer vision and scientific machine learning. However, these works have usually been restricted to predicting scalar quantities and have considered only a single type of symmetry. We study Steerable Implicit Neural Representations (SINRs) from a more general perspective, accommodating scalar, vector, or even multivector or tensor fields while considering equivariance with respect to translations, scaling, rotations, and reflections. Empirically, we explore promising use cases, such as estimating PDE solutions in scientific machine learning, optical flow fields in 2D vision, and surface normals in 3D vision. In the 3D vision case, we additionally introduce a way to obtain occupancy grids through a classification task, avoiding the expensive preprocessing steps typically required to obtain ground-truth occupancy values. Our work highlights the versatility and potential of SINRs in advancing these domains.
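For readers unfamiliar with steerability, the defining property is that transforming the input coordinates transforms the output field correspondingly: for a rotation R, a rotation-equivariant vector field f satisfies f(Rx) = R f(x). The sketch below is an illustrative toy, not the SINR architecture described in the paper; it numerically checks this condition for a hand-built equivariant 2D vector field.

```python
# Minimal sketch (toy example, not the SINR model): numerically check the
# rotation-equivariance condition f(R x) = R f(x) for a 2D vector field.
import numpy as np

def toy_vector_field(x):
    """A rotation-equivariant 2D vector field: radial and tangential parts
    whose magnitudes depend only on the distance from the origin."""
    r = np.linalg.norm(x)
    tangent = np.array([-x[1], x[0]])          # x rotated by 90 degrees
    return np.exp(-r) * x + np.sin(r) * tangent

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

rng = np.random.default_rng(0)
x = rng.normal(size=2)
R = rotation(rng.uniform(0, 2 * np.pi))

lhs = toy_vector_field(R @ x)   # transform the input, then evaluate
rhs = R @ toy_vector_field(x)   # evaluate, then transform the output
print(np.allclose(lhs, rhs))    # True: the field is rotation-equivariant
```

A steerable INR enforces the analogous constraint architecturally, so the learned field satisfies it for all group elements rather than only approximately at sampled points.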
