We propose a generalizable neural radiance field, MonoNeRF, that can be trained on large-scale monocular videos of cameras moving in static scenes, without any ground-truth annotations of depth or camera poses. MonoNeRF follows an autoencoder-based architecture: the encoder estimates monocular depth and camera pose, and the decoder constructs a Multiplane NeRF representation from the depth encoder's features and renders the input frames with the estimated camera. Learning is supervised by the reconstruction error. Once trained, the model can be applied to multiple tasks, including depth estimation, camera pose estimation, and single-image novel view synthesis. More qualitative results are available at: https://oasisyang.github.io/mononerf.
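To make the pipeline concrete, below is a minimal PyTorch sketch of the autoencoder loop the abstract describes: an encoder predicts monocular depth and a relative camera pose from a pair of frames, a decoder maps the encoder features to a multiplane representation that is alpha-composited into a rendered frame, and the photometric reconstruction error is the only supervision. All module names, layer shapes, and the simplified compositing here are illustrative assumptions, not the paper's implementation; in particular, the warping of planes into the target view via per-plane homographies is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MonoNeRFSketch(nn.Module):
    """Conceptual sketch only: encoder -> depth + pose, decoder -> multiplane render."""

    def __init__(self, num_planes=32, feat_dim=64):
        super().__init__()
        # Encoder backbone shared by the depth and pose heads (assumed structure).
        self.depth_encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
        )
        self.depth_head = nn.Conv2d(feat_dim, 1, 1)  # per-pixel monocular depth
        self.pose_head = nn.Sequential(              # 6-DoF relative camera pose
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(2 * feat_dim, 6)
        )
        # Decoder: depth-encoder features -> per-plane RGBA of a multiplane representation.
        self.plane_decoder = nn.Conv2d(feat_dim, num_planes * 4, 1)

    def composite(self, planes):
        # planes: (B, P, 4, H, W); plane 0 is assumed farthest.
        # Back-to-front "over" blending of the RGBA planes.
        rgb, alpha = torch.sigmoid(planes[:, :, :3]), torch.sigmoid(planes[:, :, 3:])
        out = torch.zeros_like(rgb[:, 0])
        for p in range(planes.shape[1]):
            out = rgb[:, p] * alpha[:, p] + out * (1 - alpha[:, p])
        return out

    def forward(self, src, tgt):
        feat_src, feat_tgt = self.depth_encoder(src), self.depth_encoder(tgt)
        depth = self.depth_head(feat_src)
        pose = self.pose_head(torch.cat([feat_src, feat_tgt], dim=1))
        b, _, h, w = src.shape
        planes = self.plane_decoder(feat_src).view(b, -1, 4, h, w)
        # A real implementation would warp each plane into the target view using
        # `pose` and the plane depths before compositing; omitted in this sketch.
        recon = self.composite(planes)
        return recon, depth, pose


# One training step: photometric reconstruction is the only supervision signal.
model = MonoNeRFSketch()
src, tgt = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
recon, depth, pose = model(src, tgt)
loss = F.l1_loss(recon, tgt)
loss.backward()
```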
Author Information
Yang Fu (University of California, San Diego)
Ishan Misra (Meta AI)
Xiaolong Wang (UC San Diego)

Our group has broad interests in Computer Vision, Machine Learning, and Robotics. Our focus is on learning 3D and dynamics representations from videos and physical robot interaction data. We explore various sources of supervision, including the data itself, language, and common-sense knowledge. We leverage these representations to facilitate the learning of robot skills, with the goal of enabling robots to interact effectively with a wide range of objects and environments in the real physical world. Please check out our individual research topics: Self-Supervised Learning, Video Understanding, Common Sense Reasoning, RL and Robotics, 3D Interaction, and Dexterous Hand.
More from the Same Authors
- 2021: Stabilizing Deep Q-Learning with ConvNets and Vision Transformers under Data Augmentation »
  Nicklas Hansen · Hao Su · Xiaolong Wang
- 2021: Disentangled Attention as Intrinsic Regularization for Bimanual Multi-Object Manipulation »
  Minghao Zhang · Pingcheng Jian · Yi Wu · Harry (Huazhe) Xu · Xiaolong Wang
- 2021: Learning Vision-Guided Quadrupedal Locomotion with Cross-Modal Transformers »
  Ruihan Yang · Minghao Zhang · Nicklas Hansen · Harry (Huazhe) Xu · Xiaolong Wang
- 2023 Poster: Learning Dense Correspondences between Photos and Sketches »
  Xuanchen Lu · Xiaolong Wang · Judith E. Fan
- 2023 Poster: On Pre-Training for Visuo-Motor Control: Revisiting a Learning-from-Scratch Baseline »
  Nicklas Hansen · Zhecheng Yuan · Yanjie Ze · Tongzhou Mu · Aravind Rajeswaran · Hao Su · Huazhe Xu · Xiaolong Wang
- 2023 Tutorial: Self-Supervised Learning in Vision: from Research Advances to Best Practices »
  Xinlei Chen · Ishan Misra · Randall Balestriero · Mathilde Caron · Christoph Feichtenhofer · Mark Ibrahim
- 2022 Poster: Temporal Difference Learning for Model Predictive Control »
  Nicklas Hansen · Hao Su · Xiaolong Wang
- 2022 Spotlight: Temporal Difference Learning for Model Predictive Control »
  Nicklas Hansen · Hao Su · Xiaolong Wang
- 2021 Workshop: Self-Supervised Learning for Reasoning and Perception »
  Pengtao Xie · Shanghang Zhang · Ishan Misra · Pulkit Agrawal · Katerina Fragkiadaki · Ruisi Zhang · Tassilo Klein · Asli Celikyilmaz · Mihaela van der Schaar · Eric Xing
- 2021 Poster: Barlow Twins: Self-Supervised Learning via Redundancy Reduction »
  Jure Zbontar · Li Jing · Ishan Misra · Yann LeCun · Stephane Deny
- 2021 Spotlight: Barlow Twins: Self-Supervised Learning via Redundancy Reduction »
  Jure Zbontar · Li Jing · Ishan Misra · Yann LeCun · Stephane Deny
- 2021 Poster: Compositional Video Synthesis with Action Graphs »
  Amir Bar · Roi Herzig · Xiaolong Wang · Anna Rohrbach · Gal Chechik · Trevor Darrell · Amir Globerson
- 2021 Spotlight: Compositional Video Synthesis with Action Graphs »
  Amir Bar · Roi Herzig · Xiaolong Wang · Anna Rohrbach · Gal Chechik · Trevor Darrell · Amir Globerson
- 2020 Poster: Deep Isometric Learning for Visual Recognition »
  Haozhi Qi · Chong You · Xiaolong Wang · Yi Ma · Jitendra Malik