Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing (https://arxiv.org/abs/2401.03043)
The dataset and our best pre-trained models are available on Google Drive.
To download the EM image blocks:
cd dataset
# edit the block-name CSV path (provided in the link above) and the destination path to your own
python download_fafb.py
The 4000 blocks require about 1TB of storage space.
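For orientation, here is a minimal sketch of what the download loop could look like, assuming each CSV row names a block and its voxel offset and that blocks are fetched from a precomputed image volume via cloud-volume. The cloud path, column names, and block size below are placeholders rather than the repository's actual values; the real logic lives in download_fafb.py.

```python
# Hypothetical sketch of the block-download loop; see dataset/download_fafb.py
# for the real implementation.  Cloud path, CSV columns, and block size are
# assumptions, not the repository's actual values.
import csv
import numpy as np
from cloudvolume import CloudVolume  # pip install cloud-volume

CLOUD_PATH = "precomputed://<fafb-image-volume>"  # placeholder image source
BLOCK_SIZE = (1024, 1024, 128)                    # assumed (x, y, z) block shape in voxels
DEST_DIR = "/path/to/fafb_blocks"                 # destination path

vol = CloudVolume(CLOUD_PATH, mip=0, progress=False)

with open("block_names.csv") as f:                # CSV provided via the Google Drive link
    for row in csv.DictReader(f):                 # assumed columns: block_name, x, y, z
        name = row["block_name"]
        x, y, z = int(row["x"]), int(row["y"]), int(row["z"])
        block = vol[x:x + BLOCK_SIZE[0],
                    y:y + BLOCK_SIZE[1],
                    z:z + BLOCK_SIZE[2]]
        np.save(f"{DEST_DIR}/{name}.npy", np.asarray(block).squeeze())
```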
Most of the dependencies are included in this docker image:
FROM registry.cn-hangzhou.aliyuncs.com/janechen/xp_projects:v1
Extra packages:
pip install connected-components-3d plyfile numba
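A quick sanity check that the extra packages are importable inside the container (the module names below are the standard import names for these packages):

```python
# Verify that the extra dependencies are importable.
import cc3d       # connected-components-3d
import plyfile    # PLY point-cloud I/O
import numba      # JIT compiler

print("cc3d, plyfile and numba imported successfully")
```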
Required data:
training -- raw image, ground-truth segmentation, initial over-segmentation
testing -- raw image, initial over-segmentation
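As an illustration of this layout, a minimal loading sketch assuming HDF5 volumes; the file names and the "main" dataset key are assumptions, not the repository's actual paths:

```python
# Hypothetical sketch of loading the required volumes; file and dataset
# names are assumptions.
import h5py

with h5py.File("train_image.h5", "r") as f:
    image = f["main"][:]          # raw EM image volume, (z, y, x)
with h5py.File("train_gt.h5", "r") as f:
    gt_seg = f["main"][:]         # ground-truth segmentation (training only)
with h5py.File("train_overseg.h5", "r") as f:
    over_seg = f["main"][:]       # initial over-segmentation

assert image.shape == gt_seg.shape == over_seg.shape
```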
- Prepare image patches of positive connections for fine-tuning Connect-Embed using get_snemi_patch.py. The script produces image/GT/segmentation patches centered at the center of gravity of the connected area between each segment pair, as sketched below.
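A minimal sketch of the cropping idea behind this step, assuming numpy/scipy volumes; the function name and patch size are illustrative, and the actual implementation is get_snemi_patch.py:

```python
# Sketch: crop a patch centered at the center of gravity of the contact
# region between two over-segmentation fragments a_id and b_id.
# Illustration only, not the code in get_snemi_patch.py.
import numpy as np
from scipy.ndimage import binary_dilation

def crop_connection_patch(image, gt_seg, over_seg, a_id, b_id, size=(32, 128, 128)):
    a_mask = over_seg == a_id
    b_mask = over_seg == b_id
    # voxels of b that touch a (the connected area between the pair)
    contact = binary_dilation(a_mask) & b_mask
    if not contact.any():
        return None
    center = np.round(np.argwhere(contact).mean(axis=0)).astype(int)
    starts = [max(0, c - s // 2) for c, s in zip(center, size)]
    sl = tuple(slice(start, start + s) for start, s in zip(starts, size))
    # no border padding here for brevity; patches near the volume edge are smaller
    return image[sl], gt_seg[sl], over_seg[sl]
```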
- Sample point clouds from the segmentation using get_pc_snemi3d.py (see the sketch below).
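A sketch of the sampling idea, assuming uniform random sampling of voxels per segment and PLY output via plyfile; the sample count is arbitrary and the actual logic is in get_pc_snemi3d.py:

```python
# Sketch: sample a fixed-size point cloud from one segment of the
# over-segmentation and save it as a .ply file.  Illustrative only.
import numpy as np
from plyfile import PlyData, PlyElement

def segment_to_pointcloud(over_seg, seg_id, n_points=2048, seed=0):
    coords = np.argwhere(over_seg == seg_id).astype(np.float32)  # (z, y, x) voxels
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(coords), size=n_points, replace=len(coords) < n_points)
    return coords[idx]

def save_ply(points_zyx, path):
    # PLY convention is (x, y, z), so reverse the (z, y, x) voxel order
    pts = points_zyx[:, ::-1]
    vertex = np.array([tuple(p) for p in pts],
                      dtype=[("x", "f4"), ("y", "f4"), ("z", "f4")])
    PlyData([PlyElement.describe(vertex, "vertex")]).write(path)
```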
- Fine-tune the image embedding model and run inference on the test set with the following configs:
# train
python main.py --config-base configs/Image-Base.yaml --config-file configs/imageEmbedding/Image-Unet-SNEMI3D.yaml --checkpoint embedding_best_model.pth
# inference
python main.py --config-base configs/Image-Base.yaml --config-file configs/imageEmbedding/Image-Unet-SNEMI3D.yaml --checkpoint SNEMI_embedding_best.pth --inference INFERENCE.OUTPUT_PATH test SYSTEM.NUM_CPUS 0
- Map the computed embeddings to the point clouds using map_pc_snemi3d.py (sketched below).
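Conceptually, this step looks up, for each sampled point, the embedding vector at that voxel; a minimal sketch under that assumption (the real code is map_pc_snemi3d.py):

```python
# Sketch: attach the image-embedding vector at each sampled voxel to the
# point cloud, giving per-point multimodal features.  Illustrative only.
import numpy as np

def map_embedding_to_points(points_zyx, embedding):
    """points_zyx: (N, 3) voxel coordinates; embedding: (C, Z, Y, X) volume."""
    z, y, x = points_zyx.astype(int).T
    feats = embedding[:, z, y, x].T                       # (N, C) per-point embedding
    return np.concatenate([points_zyx, feats], axis=1)    # (N, 3 + C)
```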
- Fine-tune the PointNet++ model; refer to Pointnet/README.
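For orientation only, the per-point network input is the sampled coordinates concatenated with the mapped embedding features; a shape-only sketch with assumed batch size and feature dimension (the actual training code is under Pointnet/):

```python
# Sketch of the tensor layout fed to the point-cloud model: xyz plus the
# per-point embedding features produced by the mapping step above.
import torch

B, N, C = 8, 2048, 16                      # batch, points per segment pair, embedding dim (example values)
xyz = torch.rand(B, N, 3)                  # sampled point coordinates
emb = torch.rand(B, N, C)                  # mapped image-embedding features
points = torch.cat([xyz, emb], dim=-1)     # (B, N, 3 + C) network input
```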
TODO: merge steps 3 and 4
If you find our repository useful in your research, please consider citing:
@inproceedings{chen2024learning,
  title={Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing},
  author={Qihua Chen and Xuejin Chen and Chenxuan Wang and Yixiong Liu and Zhiwei Xiong and Feng Wu},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2024},
}