NeuralMarker: A Framework for Learning General Marker Correspondence
Zhaoyang Huang*, Xiaokun Pan*, Weihong Pan, Weikang Bian, Yan Xu, Ka Chun Cheung, Guofeng Zhang, Hongsheng Li
SIGGRAPH Asia (ToG) 2022
- Code release
- Models release
- Demo code release
- Dataset & evaluation code release
```shell
conda create -n neuralmarker
conda activate neuralmarker
conda install python=3.7
pip install -r requirements.txt
```
We use the MegaDepth dataset preprocessed by CAPS, which is provided in this link. We generate the FlyingMarkers training set online. To generate the FlyingMarkers validation and test sets, please execute:
```shell
python synthesis_datasets.py --root ./data/MegaDepth_CAPS/ --csv ./data/synthesis_validate_release.csv --save_dir ./data/flyingmarkers/validation
python synthesis_datasets.py --root ./data/MegaDepth_CAPS/ --csv ./data/synthesis_validate_short.csv --save_dir ./data/validation/synthesis
python synthesis_datasets.py --root ./data/MegaDepth_CAPS/ --csv ./data/synthesis_test_release.csv --save_dir ./data/flyingmarkers/test
```
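FlyingMarkers pairs are synthesized by warping images with known transformations, so dense ground-truth correspondence comes for free. As an illustration only (the repository's `synthesis_datasets.py` is the reference implementation), the dense correspondence induced by a homography can be computed with plain numpy:

```python
import numpy as np

def homography_correspondence(H, h, w):
    """For each source pixel (x, y) in an h x w image, return the target
    location it maps to under the 3x3 homography H (illustrative sketch)."""
    ys, xs = np.mgrid[0:h, 0:w]
    # Homogeneous pixel coordinates, shape 3 x (h*w).
    pts = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    warped = H @ pts.astype(np.float64)
    warped = warped[:2] / warped[2:3]          # de-homogenize
    return warped.T.reshape(h, w, 2)           # (x, y) target per source pixel

# The identity homography maps every pixel to itself.
corr = homography_correspondence(np.eye(3), 4, 5)
```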
The pretrained models, DVL-Markers benchmark, and data for demo are stored in Google Drive.
We train our model on 6 V100 GPUs with a batch size of 2.
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5 python train.py
```
Put the DVL-Markers dataset in `data`:
```
data
└── DVL
    ├── D
    ├── V
    ├── L
    └── marker
```
then run
```shell
bash eval_DVL.sh
```
The results will be saved in `output`.
To evaluate on the FlyingMarkers test set, run
```shell
python evaluation_FM.py
```
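Since the synthesized pairs carry ground-truth correspondences, evaluation typically measures pixel error between predicted and ground-truth matches, e.g. the fraction of correspondences within a pixel threshold (PCK). A minimal sketch of such a metric (hypothetical helper, not the repository's actual evaluation code):

```python
import numpy as np

def pck(pred, gt, threshold=3.0):
    """Percentage of correct keypoints: fraction of predicted points
    within `threshold` pixels of the ground truth (hypothetical metric
    sketch, not the repo's evaluation code)."""
    err = np.linalg.norm(pred - gt, axis=-1)   # per-point pixel error
    return float((err <= threshold).mean())

# Two predictions: one off by 1 px (correct), one off by 10 px (wrong).
score = pck(np.array([[0.0, 0.0], [10.0, 0.0]]),
            np.array([[0.0, 1.0], [0.0, 0.0]]))
```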
For the video demo, run
```shell
bash demo_video.sh
```
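The demo re-renders content onto the detected marker in each frame using the estimated correspondence. As an illustration of the final compositing step only (a hypothetical helper, not the repository's demo code), a pure-numpy inverse warp that pastes a marker image into a frame under a homography:

```python
import numpy as np

def paste_marker(frame, marker, H):
    """Inverse-warp compositing sketch: for each frame pixel, map it back
    through H^-1 into marker coordinates and copy the marker pixel if it
    lands inside the marker (nearest-neighbor, hypothetical helper)."""
    h, w = frame.shape[:2]
    mh, mw = marker.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1).astype(np.float64)
    back = np.linalg.inv(H) @ pts              # frame -> marker coordinates
    mx = np.round(back[0] / back[2]).astype(int)
    my = np.round(back[1] / back[2]).astype(int)
    valid = (mx >= 0) & (mx < mw) & (my >= 0) & (my < mh)
    out = frame.copy()
    out.reshape(-1, *frame.shape[2:])[valid] = marker[my[valid], mx[valid]]
    return out

# Identity homography pastes the 2x2 marker into the frame's top-left corner.
result = paste_marker(np.zeros((4, 4)), np.full((2, 2), 9.0), np.eye(3))
```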
We thank Yijin Li, Rensen Xu, and Jundan Luo for their help. We refer to DGC-Net for generating synthetic image pairs.