
Exploiting Local and Global Structure for Point Cloud Semantic Segmentation with Contextual Point Representations

Code for the paper: Exploiting Local and Global Structure for Point Cloud Semantic Segmentation with Contextual Point Representations

Introduction

We propose a novel model for point cloud semantic segmentation, which exploits the local and global structures within the point cloud based on contextual point representations. Specifically, we enrich each point representation by performing a novel gated fusion on the point itself and its contextual points. Afterwards, based on the enriched representation, we propose a graph pointnet module (GPM), relying on the graph attention block (GAB), to dynamically compose and update each point representation within the local point cloud structure. Finally, we resort to spatial-wise and channel-wise attention strategies to exploit the global structure of the point cloud and thereby yield the semantic label for each point.
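
To make the gated fusion concrete, here is a minimal numpy sketch of the idea: a learned gate decides, per channel, how much of the point's own feature versus its aggregated contextual feature to keep. The weight names and the mean-aggregation of contextual points are illustrative assumptions, not the exact implementation from this repository.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(p, ctx, W_p, W_c, b):
    """Illustrative gated fusion of a point feature with its context.

    p:   (d,)   feature of the point itself
    ctx: (k, d) features of its k contextual points
    W_p, W_c: (d, d) hypothetical learned projections; b: (d,) bias
    """
    c = ctx.mean(axis=0)                      # aggregate contextual points (assumed mean)
    g = sigmoid(W_p.dot(p) + W_c.dot(c) + b)  # per-channel gate in (0, 1)
    return g * p + (1.0 - g) * c              # enriched point representation

d, k = 64, 16
rng = np.random.RandomState(0)
enriched = gated_fusion(rng.randn(d), rng.randn(k, d),
                        rng.randn(d, d), rng.randn(d, d), np.zeros(d))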

Data download and process

Scene Semantic Segmentation

We provide the processed files; you can download the S3DIS data here . To prepare your own S3DIS HDF5 files, refer to PointNet: first download the 3D indoor parsing dataset version 1.2 (S3DIS Dataset) and convert the original data to data label files by

python collect_indoor3d_data.py

Finally run

python gen_indoor3d_h5.py

to downsample the points and generate the HDF5 files. You can change the number of points used in the downsampling by modifying this file.
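
As a quick sanity check after generation, you can inspect the resulting HDF5 files. The dataset names 'data' and 'label' follow the PointNet HDF5 convention and are an assumption here; adjust them (and the hypothetical file name) if your files differ.

import h5py

with h5py.File('ply_data_all_0.h5', 'r') as f:
    data = f['data'][:]    # expected shape: (num_blocks, num_points, num_channels)
    label = f['label'][:]  # expected shape: (num_blocks, num_points)
    print(data.shape, label.shape)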

Part Segmentation

The processed files of the ShapeNet dataset can be downloaded here .

Dynamic Semantic Segmentation

The processed files of the Synthia4D dataset can be downloaded here .

Model Training and Testing

The code is tested under TensorFlow 1.9.0 (GPU version), Python 2.7.5, CUDA 9.0, and cuDNN 7.6.0 on Ubuntu 16.04. The main dependencies are:

  • tensorflow-gpu (1.9.0)
  • python (2.7)
  • h5py
  • numpy
  • sklearn

Compile TF operators

  1. Find your TensorFlow include path and CUDA installation path (a quick way to query the TensorFlow paths is sketched after the compile commands below).
python
import tensorflow as tf
tf.__path__
  2. Modify the compile scripts: tf_grouping_compile.sh, tf_sampling_compile.sh, and tf_interpolate_compile.sh.
  3. Compile the shared libraries.
cd tf_ops/3d_interpolation
./tf_interpolate_compile.sh

cd tf_ops/grouping
./tf_grouping_compile.sh

cd tf_ops/sampling
./tf_sampling_compile.sh
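
For step 1 above, TensorFlow exposes its include and library directories directly; this is a small sketch (tf.sysconfig is available in TensorFlow 1.9):

import tensorflow as tf

print(tf.sysconfig.get_include())  # header path to pass to -I in the compile scripts
print(tf.sysconfig.get_lib())      # library path to pass to -L in the compile scripts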

Refer to PointNet++ for more details.

Scene Semantic Segmentation

Once you have downloaded the processed data files or prepared the HDF5 files yourself, fill in your data path in train.py. Then start training:

cd sem_seg
python train.py

For the S3DIS dataset, we test on Area 5 by default. To get 6-fold results, run:

for ((i=1; i<=6; i++)); do
  python train_areas.py --test_area ${i} --log_dir log/Area${i} > train_and_test_area${i}.out 2>&1
done

You will get six models; each is trained on five areas and tested on the remaining area.

After training, you can test model by:

python test.py --ckpt your_ckpt_file --ckpt_meta your_meta_file

Note that the best_seg_model chosen by test.py depends only on overall accuracy (OA); its mIoU may not be the highest, since overall accuracy is not necessarily proportional to mean IoU (the sketch below illustrates the difference). You can test all saved models by:

python test_all_models.py
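
To see why OA and mIoU can rank models differently, here is a minimal numpy sketch computing both from a confusion matrix (illustrative only, not the metric code in test.py): a model dominated by a few frequent classes can have high OA while rare-class IoUs drag the mean down.

import numpy as np

def oa_and_miou(conf):
    """conf[i, j] = number of points of true class i predicted as class j."""
    oa = np.trace(conf) / float(conf.sum())
    tp = np.diag(conf).astype(float)
    iou = tp / (conf.sum(axis=0) + conf.sum(axis=1) - tp)  # per-class IoU
    return oa, iou.mean()

# Toy example: frequent class 0 dominates OA; rare class 1 has low IoU.
conf = np.array([[950, 50],
                 [ 40, 10]])
print(oa_and_miou(conf))  # OA ~0.91, but mIoU ~0.51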

Segmenting the rooms in the test set is already handled in this file. We use Area 5 as the test set by default; you can modify this in your own code.

We also provide our trained model. Once you have finished the data processing, you can test it by:

python test.py --ckpt trained_model/best_seg_model.ckpt --ckpt_meta trained_model/best_seg_model.ckpt.meta

Part Segmentation

Once you have downloaded the processed files of the ShapeNet dataset, fill in your data path in train.py. Then start training:

cd part_seg
python train.py

The training log and test results will be printed.

Dynamic Semantic Segmentation

Once you have downloaded the processed files of the Synthia4D dataset, fill in your data path in train_multi_gpu.py. Then start training:

cd dy_seg
python train_multi_gpu.py

Citation

@inproceedings{fly519,
  title={Exploiting Local and Global Structure for Point Cloud Semantic Segmentation with Contextual Point Representations},
  author={Xu Wang and Jingming He and Lin Ma},
  booktitle={NeurIPS},
  year={2019},
}
