Hi @Verg-Avesta, I tried to reproduce your pre-training + fine-tuning process, but my results still differ when I use the base MAE model mae_vit_base_patch16, even with the pretrained weights mentioned in issue #6 and the fixes suggested in issue #23: I get MAE 13.95 and RMSE 90.25.
On the other hand, if I use the large MAE model mae_vit_large_patch16 I obtain MAE 12.58 and RMSE 87.25, which is closer to the results discussed in the aforementioned issue (MAE: 12.44, RMSE: 89.86), but as far as I know this isn't mentioned anywhere.
What makes me think this may be the reason for the difference, besides the fact that the other parameters seem to match those given in the paper and in the readme/issues, is that the fine-tuned weights you uploaded to Drive (FSC147.pth) are 1.2GB, while my fine-tuned model is ~500MB, as already noted in issue #7 (as far as I can tell via Google Translate).
Other combinations may work as well, e.g. base MAE for pre-training and large MAE for fine-tuning, but I haven't tried that yet.
Here are the parameters I used, in case I missed something,
for pre-training:
and fine-tuning:
Does that sound reasonable? Maybe you ran the fine-tuning with the large MAE?
Thanks in advance
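As a sanity check on the base-vs-large hypothesis, something like the sketch below could be used to inspect a checkpoint. It is only a sketch: the filename and the 'model' key assume the usual MAE-style checkpoint layout and may need adjusting. The patch-embedding weight shape reveals the encoder width, which is 768 for mae_vit_base_patch16 and 1024 for mae_vit_large_patch16.

```python
import torch

# Load the fine-tuned checkpoint (filename and 'model' key are assumptions).
ckpt = torch.load("FSC147.pth", map_location="cpu")
state = ckpt.get("model", ckpt)  # fall back to a bare state dict

# The patch-embedding projection has shape [embed_dim, 3, 16, 16]:
# embed_dim is 768 for mae_vit_base_patch16 and 1024 for mae_vit_large_patch16.
for name, tensor in state.items():
    if name.endswith("patch_embed.proj.weight"):
        dim = tensor.shape[0]
        guess = "large" if dim == 1024 else "base" if dim == 768 else "unknown"
        print(f"{name}: {tuple(tensor.shape)} -> looks like the {guess} encoder")

# Rough size of the model weights alone, in MB.
total_mb = sum(t.numel() * t.element_size() for t in state.values()) / 1024**2
print(f"model state dict is roughly {total_mb:.0f} MB")
```

If the provided FSC147.pth reported a 1024-wide encoder while a locally fine-tuned one reported 768, that alone would explain both the metric gap and part of the file-size gap.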
Hello, I am also confused about why the checkpoints everyone gets are smaller than the ones I provided. I didn't use mae_vit_large_patch16 in fine-tuning (at least, not that I know of). According to issue #7, he printed the value.size() and value.dtype of the model part of his checkpoint and mine, and found that they are the same. Therefore, I guess different versions of some libraries might cause parameters such as the optimizer state not to be saved. You can try printing and comparing the parameters in the two checkpoints and check whether they are consistent.
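Something along these lines could be used for that comparison (a rough sketch; the filenames are placeholders, and the 'model' key assumes the usual MAE-style checkpoint layout):

```python
import torch

# Placeholders: your own fine-tuned checkpoint vs the provided FSC147.pth.
mine = torch.load("my_finetuned.pth", map_location="cpu")
theirs = torch.load("FSC147.pth", map_location="cpu")

# If one file also contains optimizer/scaler state, the top-level keys will differ.
print("my keys:   ", sorted(mine.keys()))
print("their keys:", sorted(theirs.keys()))

# Compare only the model part, as suggested above.
m = mine.get("model", mine)
t = theirs.get("model", theirs)
only_one = sorted(set(m) ^ set(t))
print("parameters present in only one checkpoint:", only_one or "none")

for k in sorted(set(m) & set(t)):
    if m[k].shape != t[k].shape or m[k].dtype != t[k].dtype:
        print(f"mismatch at {k}: {m[k].shape}/{m[k].dtype} vs {t[k].shape}/{t[k].dtype}")

# Note: AdamW keeps two extra fp32 buffers per parameter, so saving the optimizer
# state roughly triples the checkpoint size compared with saving the weights alone,
# which could account for a 1.2GB vs ~500MB gap without any difference in the model.
```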
As for the results, the results in the paper are the best I obtained, and I think an MAE between 12 and 13 is OK for mae_vit_base_patch16. The reproduced result in issue #7 is MAE: 13.89, RMSE: 82.74, and when he ran my checkpoint he got MAE: 12.44, RMSE: 89.86. So the results are not very different.
Hope my description can help you find the problem.