Using Open3D-ML and PyTorch for point cloud reconstruction, segmentation, and object detection with deep learning methods, together with our own GUI.
By Group A2_09: Brian, Chris, and Risa
- Learn what a point cloud is and the common formats used for point cloud datasets.
- Learn the mainstream Python frameworks for point cloud computer vision tasks, such as TensorFlow, PyTorch, PyTorch3D, and Open3D.
- Compare different methods of creating drone point cloud (.pcd) files in terms of speed, resolution, and point quality (Open3D, PPKT real-time visualization).
- Set up the development environment on the SCC and in WSL2 (Ubuntu), and use OpenGL to render the datasets.
- Test Mesh R-CNN on the SCC.
- Build the pipeline, YAML configuration, and preprocessing files, define the dataset/model class methods, and use the RandLA-Net model for segmentation (see the pipeline sketch after this list).
- Use the Semantic3D and KITTI datasets for animation, segmentation, and object detection.
- Preprocessing optimization: create data-processing optimizations on top of Open3D to improve performance (speed/quality) across all kinds of computer vision tasks (see the preprocessing sketch after this list).
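
As a rough illustration of the segmentation item above, the sketch below follows the standard Open3D-ML (PyTorch backend) pattern for running RandLA-Net inference on SemanticKITTI. The config path, dataset path, and checkpoint path are placeholders for the local setup, not the project's actual files.

```python
import open3d.ml as _ml3d
import open3d.ml.torch as ml3d  # PyTorch backend of Open3D-ML

# Placeholder paths: adjust to the local config/dataset/checkpoint locations.
cfg = _ml3d.utils.Config.load_from_file("ml3d/configs/randlanet_semantickitti.yml")
cfg.dataset["dataset_path"] = "/path/to/SemanticKITTI"

model = ml3d.models.RandLANet(**cfg.model)
dataset = ml3d.datasets.SemanticKITTI(cfg.dataset.pop("dataset_path", None), **cfg.dataset)
pipeline = ml3d.pipelines.SemanticSegmentation(model, dataset=dataset, device="gpu", **cfg.pipeline)

# Load pretrained weights and run inference on one frame of the test split.
pipeline.load_ckpt(ckpt_path="randlanet_semantickitti.pth")
data = dataset.get_split("test").get_data(0)
result = pipeline.run_inference(data)  # dict with per-point predictions (e.g. predict_labels)
```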
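
For the preprocessing optimization item, here is a minimal sketch of the kind of Open3D operations involved, assuming a local .pcd file; the voxel size and outlier-removal parameters are illustrative values, not the project's tuned settings.

```python
import open3d as o3d

# Load a raw scan (placeholder path).
pcd = o3d.io.read_point_cloud("scan.pcd")

# Voxel downsampling: reduce point density to speed up downstream tasks.
down = pcd.voxel_down_sample(voxel_size=0.05)

# Statistical outlier removal: drop points far from their neighbors to improve quality.
clean, kept_idx = down.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

print(f"{len(pcd.points)} raw points -> {len(clean.points)} after preprocessing")
o3d.visualization.draw_geometries([clean])
```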
Open3D is an open-source library that supports rapid development of software that deals with 3D data. The Open3D frontend exposes a set of carefully selected data structures and algorithms in both C++ and Python. The backend is highly optimized and is set up for parallelization. Open3D was developed from a clean slate with a small and carefully considered set of dependencies. It can be set up on different platforms and compiled from source with minimal effort. The code is clean, consistently styled, and maintained via a clear code review mechanism. Open3D has been used in a number of published research projects and is actively deployed in the cloud.
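
A small example of the Python data structures and algorithms described above (random points stand in for real data here):

```python
import numpy as np
import open3d as o3d

# Build Open3D's core PointCloud structure from a NumPy array.
points = np.random.uniform(-1.0, 1.0, size=(2048, 3))
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)

# One of the built-in geometry algorithms: normal estimation with a hybrid KD-tree search.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.2, max_nn=30))

o3d.visualization.draw_geometries([pcd], point_show_normal=True)
```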