
Total time for training the DIM dataset #3

Open
wuyujack opened this issue Jan 18, 2020 · 3 comments

Comments

wuyujack commented Jan 18, 2020

Hi Yaoyi,

Thank you for your great work, and I hope to discuss it with you at the AAAI-20 Technical Program in February, if possible!

One question: I am curious about the total time needed to train on the whole DIM training set. Your paper seems to report only the total number of training iterations.

Regards,
Mingfu Liang

@Yaoyi-Li (Owner) commented:

Thanks for your interest. It takes me more than 2 days to train the model with 4 RTX 2080 Ti GPUs, at around 1 s per batch. The model also converges with fewer iterations, e.g. 100,000, but more iterations slightly improve performance.

The data augmentations do require a lot of CPU time, but a 512x512 autoencoder is also not very efficient on the GPU side. In our training, the data loader did not introduce any obvious latency into GPU training.
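One way to check whether the data loader is actually the bottleneck is to time the loading phase and the compute phase of each iteration separately. The sketch below is not from this repository; `load_batch` and `train_step` are hypothetical stand-ins for your DataLoader iterator and your forward/backward pass.

```python
import time

def profile_loop(load_batch, train_step, n_iters=10):
    """Measure average data-loading time vs. average compute time per iteration.

    load_batch and train_step are hypothetical stand-ins: in a PyTorch
    training loop they would be next(loader_iter) and the
    forward/backward/optimizer-step call, respectively.
    """
    load_time = 0.0
    step_time = 0.0
    for _ in range(n_iters):
        t0 = time.perf_counter()
        batch = load_batch()   # e.g. next(loader_iter)
        t1 = time.perf_counter()
        train_step(batch)      # e.g. forward + backward + optimizer step
        t2 = time.perf_counter()
        load_time += t1 - t0
        step_time += t2 - t1
    return load_time / n_iters, step_time / n_iters

if __name__ == "__main__":
    # Toy usage with dummy stand-ins for the two phases:
    avg_load, avg_step = profile_loop(lambda: [0] * 8, lambda b: sum(b), n_iters=5)
    print(f"avg load {avg_load:.6f}s, avg step {avg_step:.6f}s")
```

If the measured loading time dominates, the augmentations (rather than the 512x512 autoencoder itself) are the limiting factor.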

@wrrJasmine

Hi Yaoyi,
Thank you for your great work!
I am trying to reproduce your results, but training takes far too long per batch. You mentioned it is about 1 s per batch in your experiments, while it is about 1 min per batch for me. I followed your code exactly, using a single P100 GPU with a batch size of 8. Do you know what the reason might be?

Best wishes!
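A 60x-per-batch slowdown like this is often caused by data loading running serially with the GPU step, e.g. a PyTorch `DataLoader` with `num_workers=0` so the CPU-heavy augmentations block every iteration. The underlying idea of overlapping loading with compute can be sketched with stdlib primitives alone (all names here are hypothetical, not from this repo):

```python
import queue
import threading

def prefetching_loader(load_batch, n_batches, max_prefetch=2):
    """Yield batches while a background thread prepares the next ones.

    load_batch is a hypothetical stand-in for whatever produces one
    (possibly augmentation-heavy) batch; a PyTorch DataLoader with
    num_workers > 0 does this overlap for you in separate processes.
    """
    q = queue.Queue(maxsize=max_prefetch)
    _sentinel = object()  # marks the end of the batch stream

    def worker():
        for _ in range(n_batches):
            q.put(load_batch())
        q.put(_sentinel)

    threading.Thread(target=worker, daemon=True).start()
    while True:
        batch = q.get()
        if batch is _sentinel:
            return
        yield batch  # consume (train on) the batch while the next one loads
```

With this overlap, per-iteration wall time approaches max(load time, compute time) instead of their sum, which is why raising `num_workers` is usually the first thing to try.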

@ucasiggcas

Hi,
How do I set the config file?
#25
