How to reproduce CenterTrack inference results for DIVO test set? #15
Hi, here is my "test.json"
You can get the results from
Thank you for your quick response! When I used the models shared on Hugging Face with CenterTrack, I observed that crowdhuman.pth performed better than crowdhuman_divo.pth. I had assumed that crowdhuman_divo.pth is crowdhuman.pth fine-tuned on the DIVOTrack dataset. Am I wrong? https://huggingface.co/datasets/syhao777/DIVOTrack/tree/main/Single_view_Tracking Sincerely,
Hi, could you provide more details? We only released one CenterTrack model on Hugging Face, named "crowdhuman.pth". Do you mean that the results of this model are worse?
Hello, sorry for the confusion. I have the models you shared on both Google Drive and Hugging Face, so I mixed up the model names. I have two models for CenterTrack:
I assumed that the 262 MB version is a fine-tuned version of crowdhuman.pth. However, when I inspect the output of these models, I see that the larger model performs worse. Here is the command I used to compare them: "python demo.py tracking --load_model /home/fafaf/phd/DIVOTrack/Single_view_Tracking/CenterTrack/models/crowdhuman.pth --num_class 1 --demo /home/fafaf/phd/DIVOTrack/datasets/DIVO/images/test/Circle_View1/img1"
The first is the pre-trained model provided by CenterTrack. Have you tried this command?
And tested the results with this tool?
Actually, after getting poor results (low MOTA/HOTA scores in TrackEval) from the divo_test.sh script, I ran demo.py to inspect the output visually. The model you shared (model_last.pt, 239.9 MB) performs best. Performance-wise, the order of the models is: model_last (239.9 MB) > crowdhuman.pth (80 MB) > crowdhuman.pth (262 MB)
Hello again. FairMOT also performs poorly with "model_fairmot.pth". Would you mind checking the trained models?
Hi, 'model_fairmot.pth' is correct. For the inference results, you should make sure the format is 'fid, pid, xmin, ymin, w, h, 1, -1, -1, -1'. And for 'View2', you should resize the results to 1920*1080, e.g., (xmin * 1920)/3640, (ymin * 1080)/2048, (w * 1920)/3640, (h * 1080)/2048. This is because 'View2' has a resolution of 3640*2048 during data collection, and we resized all data to 1920*1080 in the ground truth. Besides, we also uploaded resize.py for pre-processing.
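The rescaling described above can be sketched as a small script. This is a minimal sketch, not the released resize.py: the function names are my own, and it assumes the result files are comma-separated MOT lines in the 'fid, pid, xmin, ymin, w, h, 1, -1, -1, -1' layout the maintainer describes.

```python
# Rescale View2 results from the capture resolution (3640x2048)
# to the ground-truth resolution (1920x1080).
SRC_W, SRC_H = 3640, 2048
DST_W, DST_H = 1920, 1080
SX, SY = DST_W / SRC_W, DST_H / SRC_H

def rescale_line(line: str) -> str:
    # Line layout: fid, pid, xmin, ymin, w, h, 1, -1, -1, -1
    fid, pid, x, y, w, h, *rest = line.strip().split(",")
    x, y = float(x) * SX, float(y) * SY
    w, h = float(w) * SX, float(h) * SY
    return f"{fid},{pid},{x:.2f},{y:.2f},{w:.2f},{h:.2f}," + ",".join(rest)

def rescale_file(src_path: str, dst_path: str) -> None:
    with open(src_path) as src, open(dst_path, "w") as dst:
        for line in src:
            if line.strip():
                dst.write(rescale_line(line) + "\n")
```

Only View2 needs this; the other views were already collected at 1920x1080 according to the comment above.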
Hello,
I am unable to use the "divo_test.py" script in CenterTrack to produce inference results for evaluation in TrackEval. The script does not read and parse the test.json file correctly, so it does not run inference on every test image. I printed the file paths being read and saw that only 1600 images are processed.
test-path.txt
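To check where images go missing, one can count how many images test.json contains per sequence. This is a hedged sketch: it assumes test.json follows the COCO-style layout CenterTrack typically consumes, with an "images" list whose entries carry a "file_name" beginning with the sequence folder (e.g. "Circle_View1/..."); the function name is my own.

```python
import json
from collections import Counter

def images_per_sequence(json_path: str) -> Counter:
    # Count images per sequence folder in a COCO-style annotation file.
    with open(json_path) as f:
        coco = json.load(f)
    return Counter(img["file_name"].split("/")[0] for img in coco["images"])
```

Comparing these counts against the number of frames actually on disk per sequence should show whether the 1600-image cap comes from the JSON itself or from the script's path parsing.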
It also produces the same annotations for different sequences. For example, the outputs for Circle_View1 and Circle_View2 are identical:
Circle_View1.txt
Circle_View2.txt
Possible problems:
Could you please help me reproduce the evaluation results for CenterTrack?
Thanks