
hdn Training Tutorial

This tutorial describes how to train hdn.

Add hdn to your PYTHONPATH

export PYTHONPATH=/path/to/hdn:$PYTHONPATH
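If you want this setting to survive new shells, you can append it to your shell profile. A minimal sketch, assuming bash and that /path/to/hdn is your checkout of the repository:

# persist the PYTHONPATH entry for future bash sessions (adjust the path to your checkout)
echo 'export PYTHONPATH=/path/to/hdn:$PYTHONPATH' >> ~/.bashrc
source ~/.bashrc
# quick sanity check that the path is visible to Python
python -c "import os; print(os.environ.get('PYTHONPATH'))"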

Prepare training dataset

Detailed preparation instructions are provided in the training_dataset directory.

Download pretrained backbones

Download the pretrained backbones from here and put them in the project_root/pretrained_models directory.
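For example, if the downloaded weights end up in ~/Downloads, moving them into place might look like the sketch below (the *.pth pattern is illustrative; use whatever file names the download actually contains):

# create the expected directory and move the downloaded backbone weights into it
mkdir -p /path/to/hdn/pretrained_models
mv ~/Downloads/*.pth /path/to/hdn/pretrained_models/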

Training

Multi-processing Distributed Data Parallel Training

Refer to the PyTorch distributed training documentation for a detailed description.
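Before launching, it can help to confirm that the expected GPUs are visible on the node (this assumes the NVIDIA driver and nvidia-smi are installed):

# list the GPUs visible to the driver; the indices used below (0-3) should appear here
nvidia-smi -L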

Single node, multiple GPUs (We use 4 GPUs):

cd experiments/tracker_homo_config

# set the desired config in proj_e2e_GOT_unconstrained_v2.yaml, then launch:
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python -m torch.distributed.launch \
    --nproc_per_node=4 \
    --master_port=8845 \
    ../../tools/train.py --cfg proj_e2e_GOT_unconstrained_v2.yaml
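On recent PyTorch releases, torch.distributed.launch is deprecated in favour of torchrun. A roughly equivalent invocation is sketched below; note that torchrun passes the local rank through the LOCAL_RANK environment variable rather than a --local_rank argument, so check that tools/train.py handles that before relying on it:

# sketch of the same launch with torchrun (verify that train.py reads LOCAL_RANK)
CUDA_VISIBLE_DEVICES=0,1,2,3 \
torchrun --nproc_per_node=4 --master_port=8845 \
    ../../tools/train.py --cfg proj_e2e_GOT_unconstrained_v2.yaml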