

Poster in Workshop: Geometry-grounded Representation Learning and Generative Modeling

Geometry-informed Neural Networks

Arturs Berzins · Andreas Radler · Sebastian Sanokowski · Sepp Hochreiter · Johannes Brandstetter

Keywords: [ conditional neural fields ] [ theory-informed learning ] [ generative design ] [ implicit neural representation ] [ physics-informed neural networks ]


Abstract:

We introduce the concept of geometry-informed neural networks (GINNs), which encompasses (i) learning under geometric constraints, (ii) neural fields as a suitable representation, and (iii) generating diverse solutions to the under-determined systems that often arise in geometric tasks. Notably, GINNs are formulated for scenarios where no training data is required, and can therefore be regarded as generative modeling driven purely by constraints. To counter mode collapse, we explicitly enforce diversity among the generated samples. We formulate GINNs for a range of problems by combining several differentiable constraint losses; in particular, we use Morse theory to optimize for discrete requirements such as the connectedness of components. Experimentally, we demonstrate the efficacy of the GINN learning paradigm across two- and three-dimensional scenarios of varying complexity.
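To make the constraint-driven, data-free training idea concrete, the following is a minimal, hypothetical PyTorch sketch. The `ConditionalField` network, the `interface_loss` and `envelope_loss` constraint terms, and the `diversity_loss` penalty are illustrative assumptions chosen to mirror the abstract (a neural field conditioned on a latent code, trained only from differentiable geometric losses plus a diversity term against mode collapse); they are not the authors' actual GINN formulation, and the Morse-theory connectedness loss is omitted.

```python
# Illustrative sketch only: a constraint-driven conditional neural field.
# All losses and hyperparameters below are hypothetical stand-ins, not the paper's method.
import torch
import torch.nn as nn


class ConditionalField(nn.Module):
    """Implicit field f(x, z): 2D point plus latent code -> scalar (SDF-like) value."""

    def __init__(self, latent_dim=8, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 + latent_dim, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, z):
        # x: (N, 2) query points; z: (latent_dim,) latent code broadcast to all points
        z = z.unsqueeze(0).expand(x.shape[0], -1)
        return self.net(torch.cat([x, z], dim=-1)).squeeze(-1)


def interface_loss(field, z, boundary_pts):
    """Hypothetical constraint: the zero level set should pass through prescribed points."""
    return field(boundary_pts, z).pow(2).mean()


def envelope_loss(field, z, outside_pts):
    """Hypothetical constraint: the field should be positive (empty) outside the design envelope."""
    return torch.relu(-field(outside_pts, z)).mean()


def diversity_loss(field, zs, probe_pts):
    """Encourage different latent codes to yield different fields (remedy for mode collapse)."""
    vals = torch.stack([field(probe_pts, z) for z in zs])  # (K, N) field values per latent code
    dists = torch.cdist(vals, vals)                        # pairwise L2 distances between fields
    K = vals.shape[0]
    return -dists.sum() / (K * (K - 1))                    # maximize mean pairwise distance


if __name__ == "__main__":
    torch.manual_seed(0)
    field = ConditionalField()
    opt = torch.optim.Adam(field.parameters(), lr=1e-3)
    zs = [torch.randn(8) for _ in range(4)]  # a few latent codes, i.e. candidate designs

    # Toy geometry (assumed for illustration): interface on the unit circle,
    # and sample points well outside it to act as the design envelope.
    theta = torch.linspace(0, 2 * torch.pi, 64)
    boundary_pts = torch.stack([theta.cos(), theta.sin()], dim=-1)
    outside_pts = 1.5 + torch.rand(128, 2)
    probe_pts = torch.rand(256, 2) * 3 - 1.5  # probe points for the diversity term

    for step in range(200):  # no training data: all losses come from constraints
        loss = sum(interface_loss(field, z, boundary_pts)
                   + envelope_loss(field, z, outside_pts) for z in zs) / len(zs)
        loss = loss + 0.1 * diversity_loss(field, zs, probe_pts)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

The key design point the sketch tries to convey is that every term in the objective is a differentiable function of the field values at sampled points, so the generator can be trained purely from constraints without any example shapes, while the diversity term keeps the different latent codes from collapsing to a single solution.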
