

Poster

Cross-Domain 3D Equivariant Image Embeddings

Carlos Esteves · Avneesh Sud · Zhengyi Luo · Kostas Daniilidis · Ameesh Makadia

Pacific Ballroom #25

Keywords: [ Transfer and Multitask Learning ] [ Semi-Supervised Learning ] [ Representation Learning ] [ Computer Vision ] [ Architectures ]


Abstract:

Spherical convolutional networks have been introduced recently as tools to learn powerful feature representations of 3D shapes. Spherical CNNs are equivariant to 3D rotations, making them ideally suited to applications where 3D data may be observed in arbitrary orientations. In this paper, we learn 2D image embeddings with a similar equivariant structure: embedding the image of a 3D object should commute with rotations of the object. We introduce a cross-domain embedding from 2D images into a spherical CNN latent space. This embedding encodes images with 3D shape properties and is equivariant to 3D rotations of the observed object. The model is supervised only by target embeddings obtained from a spherical CNN pretrained for 3D shape classification. We show that learning a rich embedding for images with appropriate geometric structure is sufficient for tackling varied applications, such as relative pose estimation and novel view synthesis, without requiring additional task-specific supervision.
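To make the training setup in the abstract concrete, below is a minimal sketch of the cross-domain supervision idea: a 2D image encoder is trained to regress the latent features produced by a frozen spherical CNN pretrained for 3D shape classification, so the image embedding inherits the spherical CNN's rotation-equivariant structure. The encoder architecture, embedding shape, and the assumption that target embeddings are precomputed are illustrative placeholders, not the authors' exact configuration.

# Sketch of cross-domain embedding regression (PyTorch).
# Assumptions (not from the paper): embedding shape, encoder architecture,
# and precomputed spherical-CNN target embeddings.
import torch
import torch.nn as nn

class ImageToSphericalEmbedding(nn.Module):
    """Maps a 2D image to a feature map laid out on a spherical grid."""
    def __init__(self, out_channels=32, sphere_res=32):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
            nn.Linear(128 * 4 * 4, out_channels * sphere_res * sphere_res),
        )
        self.out_shape = (out_channels, sphere_res, sphere_res)

    def forward(self, images):
        return self.backbone(images).view(-1, *self.out_shape)

def train_step(encoder, optimizer, images, target_embeddings):
    """One step of regressing embeddings from a frozen, pretrained spherical CNN."""
    optimizer.zero_grad()
    pred = encoder(images)
    loss = nn.functional.mse_loss(pred, target_embeddings)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    enc = ImageToSphericalEmbedding()
    opt = torch.optim.Adam(enc.parameters(), lr=1e-4)
    # Dummy batch: rendered images and their target spherical-CNN embeddings.
    imgs = torch.randn(8, 3, 128, 128)
    targets = torch.randn(8, *enc.out_shape)
    print(train_step(enc, opt, imgs, targets))

Because the targets themselves transform predictably under 3D rotations of the object, an encoder trained this way yields image embeddings with the geometric structure needed for tasks such as relative pose estimation without extra task-specific labels.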
