We introduce the SE(3)-Transformer, a variant of the self-attention module for 3D point clouds, which is equivariant under continuous 3D roto-translations. Equivariance is important to ensure stable and predictable performance in the presence of nuisance transformations of the input data. A positive corollary of equivariance is increased weight-tying within the model, leading to fewer trainable parameters and thus decreased sample complexity (i.e., we need less training data). The SE(3)-Transformer leverages the benefits of self-attention to operate on large point clouds with varying numbers of points, while guaranteeing SE(3)-equivariance for robustness. We achieve competitive performance on two real-world datasets, ScanObjectNN and QM9.
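To make the equivariance property concrete: a function f is SE(3)-equivariant if f(Rx + t) = R f(x) + t for every rotation R and translation t. The sketch below (not the paper's architecture; a hypothetical toy layer standing in for the attention module, using only NumPy) shows how such a property can be verified numerically.

```python
import numpy as np

def random_rotation():
    """Sample a random 3D rotation matrix via QR decomposition."""
    q, r = np.linalg.qr(np.random.randn(3, 3))
    q *= np.sign(np.diag(r))       # fix column signs for a unique factorization
    if np.linalg.det(q) < 0:       # ensure a proper rotation (det = +1)
        q[:, 0] *= -1
    return q

def toy_equivariant_layer(points):
    """A trivially SE(3)-equivariant map: pull each point halfway toward
    the cloud's centroid. A stand-in for the (far richer) attention layer."""
    centroid = points.mean(axis=0, keepdims=True)
    return 0.5 * points + 0.5 * centroid

# Equivariance check: f(R x + t) should equal R f(x) + t.
x = np.random.randn(100, 3)        # a toy point cloud of 100 points
R, t = random_rotation(), np.random.randn(3)

out_of_transformed_input = toy_equivariant_layer(x @ R.T + t)
transformed_output = toy_equivariant_layer(x) @ R.T + t
assert np.allclose(out_of_transformed_input, transformed_output)
```

Because the layer commutes with roto-translations, the network never has to learn the same geometry in every orientation separately, which is the weight-tying and sample-complexity benefit the abstract refers to.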
Teaser video | [ protected link dropped ]
Author Information
Fabian Fuchs (Oxford Robotics Institute)
More from the Same Authors
- 2019 Poster: On the Limitations of Representing Functions on Sets »
  Edward Wagstaff · Fabian Fuchs · Martin Engelcke · Ingmar Posner · Michael A Osborne
- 2019 Oral: On the Limitations of Representing Functions on Sets »
  Edward Wagstaff · Fabian Fuchs · Martin Engelcke · Ingmar Posner · Michael A Osborne