When training on multiple GPUs, the saved model (in the .meta file) is split into clones, so all tensor names are prefixed by default:

source_ph --> clone_0/source_ph
custom_generated_t_style_source --> clone_0/custom_generated_t_style_source

So if anyone wants to evaluate or run inference on a single-GPU machine, be careful when the pre-trained model was trained on multiple GPUs. I recommend using this inference code.

And by the way, I am wondering why the inference speed is so slow. Loading the weights takes 5-10 seconds on my 1080Ti.
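Since the linked inference code isn't shown here, below is a minimal sketch of loading a multi-GPU .meta checkpoint on a single-GPU machine with TF 1.x; the checkpoint paths and input shape are assumptions, and only the two tensor names above come from this thread:

```python
import numpy as np
import tensorflow as tf

# Hypothetical checkpoint paths; substitute your own.
META_PATH = "model.ckpt-100000.meta"
CKPT_PATH = "model.ckpt-100000"

graph = tf.Graph()
with graph.as_default():
    # clear_devices=True strips the /gpu:N placements baked into the
    # graph during multi-GPU training so it can run on one device.
    saver = tf.train.import_meta_graph(META_PATH, clear_devices=True)

with tf.Session(graph=graph) as sess:
    saver.restore(sess, CKPT_PATH)
    # Tensors saved from a multi-GPU run carry the clone_0/ prefix.
    source_ph = graph.get_tensor_by_name("clone_0/source_ph:0")
    styled = graph.get_tensor_by_name(
        "clone_0/custom_generated_t_style_source:0")
    # Input shape is a guess; use whatever source_ph actually expects.
    image = np.zeros((1, 256, 256, 3), dtype=np.float32)
    result = sess.run(styled, feed_dict={source_ph: image})
```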
Thank you for the pointers about the changes needed for multi-GPU inference!
The time to load the weights sounds acceptable given the model size and HDD read speed. If you want faster inference, you can change the code to batch inputs.
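A rough sketch of that batching suggestion, reusing the names from the restore snippet above (load_image and image_paths are hypothetical placeholders):

```python
import numpy as np

# Run the graph once per batch instead of once per image. sess, styled,
# and source_ph come from the restore sketch above.
batch = np.stack([load_image(path) for path in image_paths])
outputs = sess.run(styled, feed_dict={source_ph: batch})
```

Note that the 5-10 second weight load is a one-time cost either way; batching only amortizes the per-image graph execution.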