Poster
Geometry-Informed Neural Networks
Arturs Berzins · Andreas Radler · Eric Volkmann · Sebastian Sanokowski · Sepp Hochreiter · Johannes Brandstetter
West Exhibition Hall B2-B3 #W-504
Geometry is a ubiquitous tool in computer graphics, design, and engineering. However, the lack of large shape datasets limits the application of state-of-the-art supervised learning methods and motivates the exploration of alternative learning strategies. To this end, we introduce geometry-informed neural networks (GINNs) -- a framework for training shape-generative neural fields without data by leveraging user-specified design requirements in the form of objectives and constraints. By adding diversity as an explicit constraint, GINNs avoid mode-collapse and can generate multiple diverse solutions, often required in geometry tasks. Experimentally, we apply GINNs to several problems spanning physics, geometry, and engineering design, showing control over geometrical and topological properties, such as surface smoothness or the number of holes. These results demonstrate the potential of training shape-generative models without data, paving the way for new generative design approaches without large datasets.
Creating machine-learning systems that can generate new shapes, like airplane parts or structural components, usually requires vast amounts of high-quality data. But such data is often unavailable in real-world engineering workflows. What if, instead, we could teach computers to generate useful, varied shapes without needing any examples at all?

We explore Geometry-Informed Neural Networks (GINNs) -- models that learn purely from user-defined design requirements, such as manufacturability, weight, or attachment points. Rather than learning from data, GINNs are trained to satisfy these constraints directly. To reflect the need for design exploration, we also encourage the model to generate multiple distinct solutions. Surprisingly, this leads the model to discover meaningful and structured variations in shape, without being told what those variations should look like.

This points to a promising new direction for generative design in data-scarce environments, which are common in engineering. More broadly, it offers an alternative paradigm for training generative models -- one that minimizes reliance on data, whether due to scarcity, exclusivity, or copyright concerns.
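To make the recipe concrete, the sketch below trains a small conditional implicit field on purely user-specified requirements, with no shape data. It is a minimal illustration only, not the authors' implementation: the network size, the specific constraints (attachment points, an eikonal regularizer), the diversity surrogate, and all loss weights are assumptions chosen for readability.

```python
# Minimal sketch of data-free, constraint-driven training of a shape-generative
# neural field, loosely following the GINN idea described above.
# All names, constraint choices, and weights are illustrative assumptions.
import torch
import torch.nn as nn

class ConditionalField(nn.Module):
    """Implicit shape field f(x, z): 2D coordinates + latent code -> scalar level set."""
    def __init__(self, latent_dim=8, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 + latent_dim, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, z):
        # x: (N, 2) coordinates; z: (latent_dim,) latent code broadcast over points
        zb = z.expand(x.shape[0], -1)
        return self.net(torch.cat([x, zb], dim=-1)).squeeze(-1)

model = ConditionalField()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Hypothetical design requirement: the shape boundary must pass through two
# "attachment" points inside the design domain [-1, 1]^2.
attach = torch.tensor([[-0.8, 0.0], [0.8, 0.0]])

for step in range(1000):
    z1, z2 = torch.randn(8), torch.randn(8)   # two latent samples per step
    x = torch.rand(256, 2) * 2 - 1            # random points in the design domain
    x.requires_grad_(True)

    f1 = model(x, z1)

    # Constraint: the zero level set must contain the attachment points.
    loss_interface = model(attach, z1).pow(2).mean()

    # Objective: eikonal regularizer encourages a well-behaved, SDF-like field.
    grad = torch.autograd.grad(f1.sum(), x, create_graph=True)[0]
    loss_eikonal = (grad.norm(dim=-1) - 1).pow(2).mean()

    # Diversity: different latent codes should yield measurably different fields,
    # a simple surrogate for the paper's explicit diversity constraint.
    f2 = model(x, z2)
    loss_diversity = -torch.clamp((f1 - f2).abs().mean(), max=0.5)

    loss = loss_interface + 0.1 * loss_eikonal + loss_diversity
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this toy setup the only supervision comes from the constraint and objective terms; the diversity term rewards latent codes that map to distinct fields, which is one simple way to discourage mode collapse when no data distribution anchors the generator.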