
(2020-2021) We return a 3D shape reconstruction mesh from an input point cloud; the neural network operates directly on the point cloud.


municola/surface-reconstruction


ReconRLA: Point-based large-scale Surface Reconstruction from Point Clouds [PDF]

This is my Bachelor Thesis.
It is based on the Reconstruction Pipeline from Lombardo et al. and RandLA from Hu et al.

Advisor: Sandro Lombardi
Supervisor: Prof. Dr. Marc Pollefeys

Abstract

3D shape reconstruction from point clouds is a crucial part of many computer graphics and vision applications such as autonomous driving or video games. While classical methods suffer from large computational effort or memory footprints for detailed reconstructions, recent approaches use deep neural nets to implicitly represent the input shape, which enables them to generate meshes at potentially arbitrary resolution. Although these approaches return state-of-the-art results on synthetic datasets, most do not scale to larger scenes: they either fail to produce accurate representations or become computationally intractable for larger point clouds. Implicit-based methods that do scale use fast hierarchical grid structures, which have complex implementations and additional overhead. In this work we present a pipeline that operates directly on point clouds, which makes our method scalable and efficient. Furthermore, it is a lightweight and simple architecture that can be easily extended. Our approach gives good qualitative and quantitative results on synthetic datasets and generalizes to large-scale scenes. Lastly, we show that our neural net generates results comparable to state-of-the-art reconstructions on noisy real-world data.

Task

Given a 3D input point cloud, return a 3D mesh reconstruction.

For example, we may receive up to several million points that lie on the surface of a chair. By applying our reconstruction pipeline, we obtain the corresponding 3D mesh representation, i.e., a 3D model of the chair.
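The final step of such a pipeline, extracting a triangle mesh from a signed distance field, can be illustrated with a toy example. The sketch below is not the thesis pipeline; it simply builds an analytic signed-distance grid for a unit sphere and extracts the zero level set with scikit-image's marching cubes:

```python
import numpy as np
from skimage import measure

# Toy illustration (not the thesis pipeline): sample the signed distance
# to a unit sphere on a regular grid.
n = 32
coords = np.linspace(-1.5, 1.5, n)
x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
sdf = np.sqrt(x**2 + y**2 + z**2) - 1.0  # negative inside, positive outside

# Extract the zero level set (the sphere's surface) as a triangle mesh.
verts, faces, normals, values = measure.marching_cubes(sdf, level=0.0)
```

In a learned pipeline, the analytic `sdf` grid would instead be filled with the network's predicted signed distance values at the grid points.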

Implementation details

Architecture: VAE + GNN (The whole architecture is a variational autoencoder. In the encoding step we use an attention-based graph neural network to assign each point a feature vector, which we then use in the implicit decoder to predict a signed distance function value. Based on these predictions, a marching cubes algorithm then returns the 3D mesh.)
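The encoder/decoder split described above can be sketched in PyTorch. This is a minimal, hypothetical illustration of the idea (per-point features via attention, then an implicit decoder mapping a query point plus its feature to a signed distance), not the thesis implementation; all class names, layer sizes, and the use of plain self-attention instead of a RandLA-style graph network are assumptions:

```python
import torch
import torch.nn as nn

class PointEncoder(nn.Module):
    """Sketch: assign each input point a feature vector via self-attention."""
    def __init__(self, feat_dim=32):
        super().__init__()
        self.embed = nn.Linear(3, feat_dim)           # lift xyz to feature space
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)

    def forward(self, points):                        # points: (B, N, 3)
        x = self.embed(points)                        # (B, N, feat_dim)
        x, _ = self.attn(x, x, x)                     # attention over the point set
        return x                                      # per-point features

class SDFDecoder(nn.Module):
    """Sketch: implicit decoder mapping (query point, feature) -> signed distance."""
    def __init__(self, feat_dim=32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, queries, feats):                # (B, M, 3), (B, M, feat_dim)
        return self.mlp(torch.cat([queries, feats], dim=-1)).squeeze(-1)  # (B, M)
```

The decoder's predicted signed distances, evaluated on a regular grid, would then be fed to marching cubes to obtain the output mesh.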

Performance: We perform on par with DeepSDF (MIT, Facebook Reality Labs) and outperform OccNet (University of Tübingen, Google Brain).

Frameworks: PyTorch, PyTorch Lightning, Hydra

Programming Language: Python
