flywire_NeuronRec

Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing https://arxiv.org/abs/2401.03043

Dataset Access

FlyTracing pairwise segment connection dataset

The dataset and our best pre-trained models are available at Google Drive

FAFB EM image

To download the EM image blocks:

cd dataset
# edit the block-name CSV path (provided in the link above) and the destination path to yours
python download_fafb.py

The 4000 blocks require about 1TB of storage space.
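As a rough illustration of what such a download loop does (the actual logic lives in dataset/download_fafb.py; the CSV layout, file extension, and base-URL scheme below are assumptions for the sketch):

```python
# Illustrative sketch of a block-download loop; paths, URL scheme, and
# the .h5 extension are hypothetical -- use download_fafb.py for real runs.
import csv
import urllib.request
from pathlib import Path

def read_block_names(csv_path):
    """Read block names from the first column of the CSV."""
    with open(csv_path, newline="") as f:
        return [row[0] for row in csv.reader(f) if row]

def download_blocks(block_names, base_url, dest_dir):
    """Fetch each EM block and save it under dest_dir (all 4000 take ~1 TB)."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    for name in block_names:
        out = dest / f"{name}.h5"
        if out.exists():  # skip blocks that were already downloaded
            continue
        urllib.request.urlretrieve(f"{base_url}/{name}.h5", out)
```

Skipping existing files makes the loop safe to re-run after an interrupted download.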

Environment

Most of the dependencies are included in this docker image:

FROM registry.cn-hangzhou.aliyuncs.com/janechen/xp_projects:v1

Extra packages:

pip install connected-components-3d plyfile numba 

Finetune the models on SNEMI3D

Input:

training -- raw image, ground-truth segmentation, initial over-segmentation

testing -- raw image, initial over-segmentation

Steps:

  1. Prepare image patches of positive connections for finetuning Connect-Embed using get_snemi_patch.py. The script produces patch image/GT/segmentation crops centered at the center of gravity of the connected area between each segment pair.
  2. Sample point clouds from the segmentation with get_pc_snemi3d.py.
  3. Finetune the image embedding model and run inference on the test set (config):
# train
python main.py --config-base configs/Image-Base.yaml --config-file configs/imageEmbedding/Image-Unet-SNEMI3D.yaml --checkpoint embedding_best_model.pth
# inference
python main.py --config-base configs/Image-Base.yaml --config-file configs/imageEmbedding/Image-Unet-SNEMI3D.yaml --checkpoint SNEMI_embedding_best.pth --inference INFERENCE.OUTPUT_PATH test SYSTEM.NUM_CPUS 0
  4. Map the computed embedding onto the point cloud with map_pc_snemi3d.py.
  5. Finetune the PointNet++ model; refer to Pointnet/README.
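The patch centering described in step 1 can be sketched as below. This is an illustrative reconstruction, not the code in get_snemi_patch.py; the function names and the 6-connectivity dilation used to find the contact region are assumptions.

```python
# Hypothetical sketch of step 1: find the centre of gravity of the
# connected (contact) area between two segments and crop a patch there.
import numpy as np
from scipy import ndimage

def contact_centroid(seg, id_a, id_b):
    """Centre of gravity (z, y, x) of the voxels of segment A touching B."""
    a = seg == id_a
    b = seg == id_b
    # voxels of A adjacent to B under 6-connectivity (assumed here)
    contact = a & ndimage.binary_dilation(b)
    if not contact.any():
        return None  # the two segments do not touch
    return np.array(ndimage.center_of_mass(contact)).round().astype(int)

def crop_patch(vol, center, size):
    """Crop a `size`-shaped patch centred at `center`, clipped to the volume."""
    half = np.array(size) // 2
    lo = np.clip(center - half, 0, np.array(vol.shape) - np.array(size))
    lo = np.maximum(lo, 0)
    sl = tuple(slice(int(l), int(l + s)) for l, s in zip(lo, size))
    return vol[sl]
```

The same crop window would be applied to the raw image, the GT labels, and the over-segmentation so the three patches stay aligned.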

TODO: merge steps 3 and 4.
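The point-cloud sampling in step 2 might look like the following minimal sketch. The function is hypothetical; the actual sampling in get_pc_snemi3d.py may differ (e.g. sampling surface voxels or converting to physical coordinates).

```python
# Hypothetical sketch of step 2: sample a fixed-size point cloud of voxel
# coordinates from one segment of the over-segmentation.
import numpy as np

def sample_point_cloud(seg, seg_id, n_points, rng=None):
    """Uniformly sample n_points voxel coordinates from one segment."""
    rng = np.random.default_rng(rng)
    coords = np.argwhere(seg == seg_id)  # (N, 3) voxel coordinates
    if len(coords) == 0:
        return np.empty((0, 3), dtype=coords.dtype)
    # sample with replacement when the segment has fewer voxels than n_points
    idx = rng.choice(len(coords), size=n_points, replace=len(coords) < n_points)
    return coords[idx]
```

A fixed `n_points` keeps every segment's cloud the same size, which is what PointNet++-style models expect as input.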

Citation

If you find our repository useful in your research, please consider citing:

@inproceedings{chen2024learning,
      title={Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing},
      author={Qihua Chen and Xuejin Chen and Chenxuan Wang and Yixiong Liu and Zhiwei Xiong and Feng Wu},
      year={2024},
      booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
}
