This repository has been archived by the owner on Oct 31, 2023. It is now read-only.
Thank you for open-sourcing these pretrained models.
I fine-tuned the 32x32, 32x16, and 32x8 models on the CUB-200 dataset with two 1080Ti GPUs, but only achieved 87.5% accuracy on the test set with an image size of 448x448. With an image size of 224x224 I only get 84.43% test accuracy. I find that a bigger batch size gives better performance, but I only have 2 GPUs.
In the paper, the 32x16 pretrained model reached 89.2% accuracy.
Could you please share more details about the fine-tuning setup?
Thanks.
@narrowsnap, what batch size per GPU are you using for the models? Since this model was trained on 940M images and 1.5k labels, you should expect an accuracy of about 87.9% or similar. Also, for CUB it is important to do a parameter sweep over the learning rate, weight decay, and LR schedule.
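When only 2 GPUs are available, one common workaround (not described in this thread) is gradient accumulation to simulate a larger effective batch, combined with the linear LR scaling rule when sweeping the learning rate. The helpers below are a minimal sketch of that bookkeeping; the function names and the example numbers (per-GPU batch of 8, target effective batch of 256) are illustrative assumptions, not values from the paper or this repo:

```python
import math

def accumulation_steps(target_batch, per_gpu_batch, n_gpus):
    """Gradient-accumulation steps so that
    per_gpu_batch * n_gpus * steps >= target_batch."""
    return math.ceil(target_batch / (per_gpu_batch * n_gpus))

def scaled_lr(base_lr, base_batch, effective_batch):
    """Linear LR scaling rule: scale the learning rate
    proportionally to the effective batch size."""
    return base_lr * effective_batch / base_batch

# Hypothetical example: 2 GPUs, 8 images each, target effective batch 256.
steps = accumulation_steps(256, 8, 2)          # -> 16 accumulation steps
lr = scaled_lr(0.01, 256, 8 * 2 * steps)       # LR for the effective batch
```

In a training loop this means calling `loss.backward()` every step but `optimizer.step()` only every `steps` iterations, so the gradients seen per update match a larger batch. The LR, weight decay, and schedule still need the sweep mentioned above.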