Thank you for your great work, and I hope to discuss it with you at the AAAI-20 Technical Program in February, if applicable!
One question: I am curious about the total time needed to train on the whole DIM training set. It seems you only report the total number of training iterations in your paper.
Regards,
Mingfu Liang
Thanks for your interest. It takes me more than 2 days to train the model with 4 RTX 2080 Ti GPUs, at around 1 s per batch. The model also converges with fewer iterations, e.g. 100,000, but more iterations slightly improve the performance.
The data augmentations do require a lot of CPU time, but a 512x512 autoencoder is also not very efficient on the GPU side. In our training, the data loader did not introduce any obvious latency into GPU training.
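If you want to check this on your own setup, a minimal timing sketch might look like the following. This assumes a standard PyTorch training loop; `model`, `loss_fn`, `optimizer`, and `train_loader` are placeholders, not names from this repo:

```python
# Sketch: separate time spent waiting on the data loader from total step time.
import time
import torch

device = torch.device("cuda")
data_time, total_time = 0.0, 0.0
n_batches = 50

end = time.time()
for i, (image, target) in enumerate(train_loader):
    data_time += time.time() - end  # time spent waiting for the loader

    image, target = image.to(device), target.to(device)
    optimizer.zero_grad()
    loss = loss_fn(model(image), target)
    loss.backward()
    optimizer.step()

    torch.cuda.synchronize()        # wait for the GPU so timings are accurate
    total_time += time.time() - end # loader wait + transfer + compute
    end = time.time()

    if i + 1 == n_batches:
        print(f"avg per batch: data {data_time / n_batches:.3f}s, "
              f"total {total_time / n_batches:.3f}s")
        break
```

If the data time is close to the total time, the loader is the bottleneck; otherwise the GPU compute dominates.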
Hi Yaoyi,
Thank you for your great work!
I am trying to reproduce your results, but training takes far too long per batch. You mentioned it is about 1 s per batch in your experiments, while it is about 1 min per batch for me, even though I just followed your code. I am using a single P100 GPU with a batch size of 8. Do you know what the reason might be?
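One common culprit for this kind of slowdown (just a guess; this thread does not confirm the cause) is single-process data loading: the heavy augmentations run on the CPU, so a single worker can dominate the batch time. A minimal sketch, assuming the repo uses a standard PyTorch DataLoader (`MattingDataset` is a hypothetical stand-in for the actual dataset class):

```python
# Hypothetical sketch: parallelize the CPU-bound augmentations across workers.
from torch.utils.data import DataLoader

train_loader = DataLoader(
    MattingDataset(...),  # placeholder; use the repo's dataset here
    batch_size=8,
    shuffle=True,
    num_workers=8,    # run augmentation in parallel worker processes
    pin_memory=True,  # speeds up host-to-GPU copies
    drop_last=True,
)
```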