Cross-Domain 3D Equivariant Image Embeddings
Carlos Esteves · Avneesh Sud · Zhengyi Luo · Kostas Daniilidis · Ameesh Makadia

Thu Jun 13 04:35 PM -- 04:40 PM (PDT) @ Hall A

Spherical convolutional networks have recently been introduced as tools to learn powerful feature representations of 3D shapes. Spherical CNNs are equivariant to 3D rotations, making them ideally suited for applications where 3D data may be observed in arbitrary orientations. In this paper we learn 2D image embeddings with a similar equivariant structure: embedding the image of a 3D object should commute with rotations of the object. We introduce a cross-domain embedding from 2D images into a spherical CNN latent space. Our model is supervised only by target embeddings obtained from a spherical CNN pretrained for 3D shape classification. The trained model learns to encode images with 3D shape properties and is equivariant to 3D rotations of the observed object. We show that learning a rich embedding for images with the appropriate geometric structure is by itself sufficient for a range of applications. Evidence from two such applications, relative pose estimation and novel view synthesis, demonstrates that equivariant embeddings suffice for both without any task-specific supervised training.
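The central constraint of the abstract (the embedding commutes with rotations of the object) can be sketched in a toy linear setting. This is purely illustrative and is not the paper's spherical CNN: in 2D, any linear map of the form aI + bJ, where J is a 90-degree rotation, commutes with every planar rotation, giving a minimal example of an equivariant map.

```python
import numpy as np

def rotation(theta):
    """2x2 planar rotation matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Hypothetical equivariant "embedding": a*I + b*J commutes with all
# planar rotations because I and J both commute with any R in SO(2).
I = np.eye(2)
J = rotation(np.pi / 2)      # 90-degree rotation generator
phi = 1.5 * I + 0.7 * J      # toy equivariant linear map

x = np.array([1.0, 2.0])     # a sample input
R = rotation(0.3)            # an arbitrary rotation

# Equivariance check: embedding the rotated input equals
# rotating the embedded input, i.e. phi(R x) = R phi(x).
lhs = phi @ (R @ x)
rhs = R @ (phi @ x)
assert np.allclose(lhs, rhs)
```

In the paper this property holds in the latent space of a spherical CNN, where 3D rotations act on spherical feature maps rather than on 2D vectors; the toy example only demonstrates the algebraic structure of the constraint.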

Author Information

Carlos Esteves (University of Pennsylvania)
Avneesh Sud (Google)
Zhengyi Luo (University of Pennsylvania)
Kostas Daniilidis (University of Pennsylvania)
Ameesh Makadia (Google Research)
