
How to reproduce CenterTrack inference results for DIVO test set? #15

Open
sompt22 opened this issue Sep 24, 2023 · 8 comments

@sompt22 commented Sep 24, 2023

Hello,

I am unable to use the "divo_test.py" script in CenterTrack to produce inference results for evaluation in TrackEval. The script does not read and parse the test.json file correctly, so it does not run inference on every test image. I printed the file paths it reads and found that it loads only 1600 images.

test-path.txt

It also produces identical annotations for different sequences. For example, the outputs for Circle_View1 and Circle_View2 are the same:

Circle_View1.txt
Circle_View2.txt
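
As a sanity check, here is roughly what I used to inspect test.json (a minimal sketch, assuming the COCO-style tracking format produced by convert_divo_to_coco.py with "videos" and "images" entries; the key names and sequence names are my assumptions and may differ):

import json, filecmp
from collections import Counter

# Count images per sequence in test.json (should cover every test frame).
with open("test.json") as f:
    data = json.load(f)
names = {v["id"]: v["file_name"] for v in data["videos"]}
counts = Counter(names[img["video_id"]] for img in data["images"])
print("total images:", sum(counts.values()))
for seq, n in sorted(counts.items()):
    print(f"{seq}: {n} frames")

# Check whether two result files are byte-identical (hypothetical file names).
print("identical:", filecmp.cmp("Circle_View1.txt", "Circle_View2.txt", shallow=False))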

Possible problems:

  1. I did not create the test.json file correctly with the "convert_divo_to_coco.py" script.
  2. "test.py" cannot read and load the images that should be inferred.

Could you please help me reproduce the evaluation results for CenterTrack?

Thanks

@shengyuhao (Owner)

Hi, here is my "test.json" (attached as test.zip), generated by convert_divo_to_coco.py. There are no duplicate annotations. "test.py" also works with the following command:

cd src
python test.py tracking --exp_id divo --dataset divo --pre_hm --ltrb_amodal --track_thresh 0.4 --pre_thresh 0.5 --resume

You can get the results from ${CenterTrack_ROOT}/exp/tracking/divo/results_divo/.

@sompt22 (Author) commented Sep 25, 2023

Thank you for your quick response!!

When I used the shared models from Hugging Face with CenterTrack, I observed that crowdhuman.pth performed better than crowdhuman_divo.pth. I had assumed that crowdhuman_divo.pth is crowdhuman.pth fine-tuned on the DIVOTrack dataset. Am I wrong?

https://huggingface.co/datasets/syhao777/DIVOTrack/tree/main/Single_view_Tracking

Sincerely,

@shengyuhao (Owner)


Hi, could you provide more details? We only released one CenterTrack model on Hugging Face, which is named "crowdhuman.pth". Do you mean that the results of this model are worse?

@sompt22 (Author) commented Sep 25, 2023

Hello,

Sorry for the confusion. I have the models you shared on both Google Drive and Hugging Face, so my naming of the models was misleading.

I have two models for CenterTrack:

  1. crowdhuman.pth 80MB
  2. crowdhuman.pth 262MB

I assumed that the 262 MB version is the fine-tuned version of crowdhuman. However, when I inspect the output of these models, the larger model performs worse. Here is the command I use to compare them:

"python demo.py tracking --load_model /home/fafaf/phd/DIVOTrack/Single_view_Tracking/CenterTrack/models/crowdhuman.pth --num_class 1 --demo /home/fafaf/phd/DIVOTrack/datasets/DIVO/images/test/Circle_View1/img1"

@shengyuhao (Owner)

The first is the pre-trained model provided by CenterTrack. Have you tried this command?

cd src
python test.py tracking --exp_id divo --dataset divo --pre_hm --ltrb_amodal --track_thresh 0.4 --pre_thresh 0.5 --resume

And did you evaluate the results with this tool?
If the results are still not correct, test this model (shared with the same password).

@sompt22 (Author) commented Sep 25, 2023

Actually, after getting poor results (low MOTA and HOTA scores in TrackEval) from the divo_test.sh script, I ran demo.py to inspect the tracking visually.

The model you shared (model_last.pth) is 239.9 MB and performs best.

The performance order of the models is:

model_last.pth (239.9 MB) > crowdhuman.pth (80 MB) > crowdhuman.pth (262 MB)

@sompt22 (Author) commented Sep 25, 2023

Hello again,

FairMOT also performs poorly with "model_fairmot.pth".

Would you mind checking the trained models?

@shengyuhao (Owner)


Hi, 'model_fairmot.pth' is correct. For the inference results, make sure the format is 'fid, pid, xmin, ymin, w, h, 1, -1, -1, -1'. For the 'View2' sequences, you should rescale the results to 1920*1080, e.g., (xmin * 1920)/3640, (ymin * 1080)/2048, (w * 1920)/3640, (h * 1080)/2048. This is because 'View2' was captured at a resolution of 3640*2048, and we resized all data to 1920*1080 in the ground truth.

Besides, we have also uploaded resize.py for this pre-processing.
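
For reference, a minimal sketch of that rescaling (illustration only, not necessarily identical to resize.py; it assumes a plain-text result file in the 'fid, pid, xmin, ymin, w, h, 1, -1, -1, -1' format and hypothetical file names):

# Rescale View2 results from the 3640x2048 capture resolution to the 1920x1080 ground truth.
SRC_W, SRC_H = 3640, 2048
DST_W, DST_H = 1920, 1080

def rescale_results(in_path, out_path):
    sx, sy = DST_W / SRC_W, DST_H / SRC_H
    with open(in_path) as fin, open(out_path, "w") as fout:
        for line in fin:
            fields = [s.strip() for s in line.split(",")]
            fid, pid, x, y, w, h = fields[:6]
            x, y, w, h = float(x) * sx, float(y) * sy, float(w) * sx, float(h) * sy
            fout.write(f"{fid},{pid},{x:.2f},{y:.2f},{w:.2f},{h:.2f},1,-1,-1,-1\n")

# Example usage (hypothetical file names):
# rescale_results("Circle_View2.txt", "Circle_View2_rescaled.txt")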
