Poster
Implicit Geometric Regularization for Learning Shapes
Amos Gropp · Lior Yariv · Niv Haim · Matan Atzmon · Yaron Lipman

Wed Jul 15 01:00 PM -- 01:45 PM & Thu Jul 16 02:00 AM -- 02:45 AM (PDT)

Representing shapes as level sets of neural networks has recently proven useful for various shape analysis and reconstruction tasks. So far, such representations have been computed using either: (i) pre-computed implicit shape representations; or (ii) loss functions explicitly defined over the neural level sets.

In this paper we offer a new paradigm for computing high-fidelity implicit neural representations directly from raw data (i.e., point clouds, with or without normal information). We observe that a rather simple loss function, encouraging the neural network to vanish on the input point cloud and to have a unit-norm gradient, possesses an implicit geometric regularization property that favors smooth and natural zero level-set surfaces and avoids bad zero-loss solutions. We provide a theoretical analysis of this property for the linear case, and show that, in practice, our method leads to state-of-the-art implicit neural representations with a higher level of detail and fidelity than previous methods.
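As a rough illustration of the loss described above, the following is a minimal PyTorch-style sketch: a data term pulling the network to zero on the input points (with an optional normal-alignment term) plus an eikonal term encouraging a unit-norm gradient on sampled points. The function and parameter names (igr_loss, model, lambda_eik, tau) and the uniform sampling of the eikonal term are illustrative assumptions, not taken from the paper or its released code.

import torch

def igr_loss(model, points, normals=None, lambda_eik=0.1, tau=1.0):
    # Illustrative sketch, not the authors' implementation.
    points = points.requires_grad_(True)
    f = model(points)                                  # (N, 1) implicit values
    grad = torch.autograd.grad(f.sum(), points, create_graph=True)[0]

    loss = f.abs().mean()                              # vanish on the point cloud
    if normals is not None:                            # optional normal term
        loss = loss + tau * (grad - normals).norm(2, dim=-1).mean()

    # Eikonal term: unit-norm gradient on points sampled in a [-1, 1]^3 box (assumed domain).
    x = (torch.rand_like(points) * 2.0 - 1.0).requires_grad_(True)
    g = torch.autograd.grad(model(x).sum(), x, create_graph=True)[0]
    loss = loss + lambda_eik * ((g.norm(2, dim=-1) - 1.0) ** 2).mean()
    return loss

In this sketch, the eikonal term is what the abstract refers to as implicit geometric regularization: it does not fix the zero level set directly, but biases optimization toward smooth, signed-distance-like solutions.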

Author Information

Amos Gropp (Weizmann Institute of Science)
Lior Yariv (Weizmann Institute of Science)
Niv Haim (Weizmann Institute of Science)
Matan Atzmon (Weizmann Institute of Science)
Yaron Lipman (Weizmann Institute of Science)
