# Quasi-Balanced Self-Training on Noise-Aware Synthesis of Object Point Clouds for Closing Domain Gap

Paper | Data | [Supplementary Materials]

This repository contains the implementation of the ECCV 2022 paper *Quasi-Balanced Self-Training on Noise-Aware Synthesis of Object Point Clouds for Closing Domain Gap*.

This paper introduces an integrated scheme consisting of (1) physically realistic synthesis of object point clouds, which renders stereo images by projecting speckle patterns onto CAD models, and (2) a novel quasi-balanced self-training (QBST) strategy that achieves a more balanced data distribution through sparsity-driven selection of pseudo-labeled samples for long-tailed classes.
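As a rough illustration of the sparsity-driven selection idea (a minimal sketch with hypothetical names; see the paper for the exact criterion): classes that are predicted less often receive proportionally larger selection quotas, so the pseudo-labeled set stays close to balanced.

```python
import numpy as np

def quasi_balanced_select(confidences, pseudo_labels, num_classes, budget):
    """Pick pseudo-labeled samples with per-class quotas that favor sparse classes.

    confidences: (N,) max softmax scores; pseudo_labels: (N,) predicted class ids.
    """
    counts = np.bincount(pseudo_labels, minlength=num_classes).astype(float)
    # Sparsity-driven quota: rarer predicted (long-tailed) classes get a larger share.
    weights = 1.0 / np.maximum(counts, 1.0)
    quotas = np.ceil(budget * weights / weights.sum()).astype(int)
    selected = []
    for c in range(num_classes):
        idx = np.where(pseudo_labels == c)[0]
        # Within each class, keep the most confident predictions up to the quota.
        selected.extend(idx[np.argsort(-confidences[idx])][: quotas[c]])
    return np.asarray(selected, dtype=int)
```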

## Installation Requirements

The code for the Mesh2Point pipeline, which generates noisy point clouds, is compatible with Blender 2.93, which ships with its own bundled Python environment. We use Mesh2Point to scan the ModelNet dataset and obtain a noisy point cloud dataset, SpeckleNet; the generated datasets are available here (SpeckleNet10 for 10 categories and SpeckleNet40 for 40 categories).

If you want to scan your own 3D models, please download Blender 2.93 and install the required Python libraries into Blender's bundled Python environment by running:

```bash
path_to_blender/blender-2.93.0-linux-x64/2.93/python/bin/pip install -r Mesh2Point_environment.yml
```

Meanwhile, we also release the code for Quasi-Balanced Self-Training (QBST), which is compatible with Python xx and PyTorch xx.

You can create an anaconda environment called QBST with the required dependencies by running:

```bash
conda env create -f QBST_environment.yml
conda activate QBST
```

## Usage

### Obtain noisy data

#### Data

We use our Mesh2Point pipeline to scan ModelNet and generate a new dataset, SpeckleNet. Note that Blender cannot import ModelNet's original Object File Format (.off), so we convert it to Wavefront OBJ Format (.obj). The converted version, ModelNet40_OBJ, is available here.
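We provide the converted data, but if you need to convert other .off meshes yourself, a minimal sketch using the third-party trimesh library (not part of this repository; paths are illustrative) could look like:

```python
import pathlib
import trimesh

def off_to_obj(src_root, dst_root):
    """Batch-convert .off meshes to .obj, mirroring the source directory tree."""
    for off in pathlib.Path(src_root).rglob("*.off"):
        dst = pathlib.Path(dst_root) / off.relative_to(src_root).with_suffix(".obj")
        dst.parent.mkdir(parents=True, exist_ok=True)
        trimesh.load(off, force="mesh").export(dst)

off_to_obj("ModelNet40", "ModelNet40_OBJ")
```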

You can also scan your own 3D model dataset using:

```bash
CUDA_VISIBLE_DEVICES=0 path_to_blender/blender-2.93.0-linux-x64/blender ./blend_file/spot.blend -b --python scan_models.py -- --view=1 --modelnet_dir=path_to_model_dataset --category_list=bed
```

Note that you need to organize your own data in the same directory structure as ModelNet, shown below.
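For reference, ModelNet uses one folder per category, each with train/test splits (category and file names here are illustrative):

```
ModelNet40/
├── bed/
│   ├── train/
│   │   ├── bed_0001.obj
│   │   └── ...
│   └── test/
│       └── ...
└── chair/
    └── ...
```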

### Training with an ordinary model

To be done...

### Training with QBST

To be done...

### Evaluation on ScanNet10 and DepthScanNet10

ScanNet10 is a realistic dataset generated by PointDAN. It is extracted from a smooth mesh dataset that was reconstructed from noisy depth frames. DepthScanNet10 is extracted directly from the noisy depth frame sequences, which keeps more noisy points and is therefore more realistic than ScanNet10. Both datasets use depth frame sequences from ScanNet.

To evaluate a model on ScanNet10 and DepthScanNet10, run:

To be done...
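Until the evaluation script is released, a generic PyTorch sketch of classification accuracy on a point cloud test set (all names hypothetical, not this repository's actual API) would look like:

```python
import torch

@torch.no_grad()
def evaluate(model, loader, device="cuda"):
    """Return top-1 classification accuracy over a test DataLoader."""
    model.eval()
    correct = total = 0
    for points, labels in loader:          # points: (B, N, 3) point clouds
        logits = model(points.to(device))  # logits: (B, num_classes)
        correct += (logits.argmax(dim=1).cpu() == labels).sum().item()
        total += labels.size(0)
    return correct / total
```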

## Citation

If you find our work useful in your research, please consider citing:

```bibtex
@article{chen2022quasi,
  title={Quasi-Balanced Self-Training on Noise-Aware Synthesis of Object Point Clouds for Closing Domain Gap},
  author={Chen, Yongwei and Wang, Zihao and Zou, Longkun and Chen, Ke and Jia, Kui},
  journal={arXiv preprint arXiv:2203.03833},
  year={2022}
}
```

## TODO

- update ScanNet10 link
- update ModelNet40_OBJ link
- upload QBST code
- upload ordinary training code
- generate environment.yml
- update supplemental link