Poster

PolyGen: An Autoregressive Generative Model of 3D Meshes

Charlie Nash · Yaroslav Ganin · S. M. Ali Eslami · Peter Battaglia

Keywords: [ Deep Learning - Generative Models and Autoencoders ] [ Structured Prediction ] [ Deep Sequence Models ] [ Deep Generative Models ] [ Computer Vision ]


Abstract:

Polygon meshes are an efficient representation of 3D geometry, and are of central importance in computer graphics, robotics and games development. Existing learning-based approaches for object synthesis have avoided the challenges of working with 3D meshes, instead using alternative object representations that are more compatible with neural architectures and training approaches. We present PolyGen, a generative model of 3D objects which models the mesh directly, predicting vertices and faces sequentially using a Transformer-based architecture. Our model can condition on a range of inputs, including object classes, voxels, and images, and because the model is probabilistic it can produce samples that capture uncertainty in ambiguous scenarios. We show that the model is capable of producing high-quality, usable meshes, and establish log-likelihood benchmarks for the mesh-modelling task. We also evaluate the conditional models on surface reconstruction metrics against alternative methods, and demonstrate competitive performance despite not training directly on this task.
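The abstract describes the core mechanism: the mesh is serialized and modelled autoregressively with a Transformer, first vertices and then faces. As a rough illustration only, here is a minimal sketch of the vertex-modelling idea, assuming vertex coordinates are quantized into discrete bins and flattened into a single token sequence predicted left to right under a causal mask. This is not the authors' released code; the names and hyperparameters (VertexModel, num_bins, and so on) are illustrative assumptions.

```python
# Minimal sketch (not the PolyGen implementation) of an autoregressive
# Transformer over quantized mesh-vertex coordinates.
import torch
import torch.nn as nn


class VertexModel(nn.Module):
    def __init__(self, num_bins=256, d_model=256, n_heads=8, n_layers=6, max_len=2400):
        super().__init__()
        # Two extra token ids reserved for start and stop symbols (assumed convention).
        self.token_emb = nn.Embedding(num_bins + 2, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.decoder = nn.TransformerEncoder(layer, n_layers)
        self.out = nn.Linear(d_model, num_bins + 2)

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer coordinate bins, already shifted right.
        seq_len = tokens.size(1)
        pos = torch.arange(seq_len, device=tokens.device)
        h = self.token_emb(tokens) + self.pos_emb(pos)
        # Causal mask so each position attends only to earlier coordinates.
        mask = torch.triu(
            torch.full((seq_len, seq_len), float("-inf"), device=tokens.device),
            diagonal=1,
        )
        h = self.decoder(h, mask=mask)
        return self.out(h)  # per-position logits over coordinate bins


# Training reduces to next-token cross-entropy (i.e. maximizing log-likelihood).
model = VertexModel()
tokens = torch.randint(0, 256, (4, 120))  # toy batch of quantized coordinates
logits = model(tokens[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1)
)
```

A face model of the same autoregressive form, conditioned on the generated vertices, would complete the pipeline; conditioning on classes, voxels, or images can be added by injecting encoded context into the Transformer, as the abstract indicates.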
