Inference model trained on Multiple GPU #21

Open
veya2ztn opened this issue Nov 26, 2018 · 1 comment

Comments

veya2ztn commented Nov 26, 2018

For multi-GPU training, the saved model (in the .meta file) is split into two clones,
so all the tensor names are changed by default, e.g.
sources_ph --> clone_0/sources_ph
custom_generated_t_style_source --> clone_0/custom_generated_t_style_source
So if anyone wants to evaluate or run inference on their own single-GPU machine, please be careful when the pre-trained model was trained on multiple GPUs.
I recommend using this inference command:

python inference/image_translation_infer.py \
--model_path="/PATH/TO/CHECKPOINT" \
--image_hw=128 \
--input_tensor_name="clone_0/sources_ph" \
--output_tensor_name="clone_0/custom_generated_t_style_source" \
--input_image_path="/PATH/TO/INPUT" \
--output_image_path="/PATH/TO/OUTPUT"

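If you are not sure what the tensors are called in your own checkpoint, you can list them before running inference. Below is a minimal TensorFlow 1.x sketch (the .meta path and the name filters are placeholders, not part of the repo):

import tensorflow as tf

META_PATH = "/PATH/TO/CHECKPOINT.meta"  # placeholder path

# Load only the graph definition; no weights are restored here.
tf.train.import_meta_graph(META_PATH)

# Print operation names so the "clone_0/" prefix added by multi-GPU training is visible.
for op in tf.get_default_graph().get_operations():
    if "ph" in op.name or "custom_generated" in op.name:
        print(op.name)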
And by the way, I am wondering why the inference speed is so slow. Loading the weights takes 5-10 seconds on my 1080 Ti.

@jerryli27
Owner

Thank you for the pointers on the tensor-name changes for multi-GPU inference!
I think the time to load the weights sounds acceptable given the model size and HDD read speed. If you want faster inference, you can change the code to do batching.
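For reference, here is a minimal batching sketch in TensorFlow 1.x. It assumes the tensor names from the command above and simple [0, 1] float preprocessing; the actual preprocessing in inference/image_translation_infer.py may differ, so treat this as an illustration of amortizing the one-time graph/weight loading cost over many images, not as the repo's API:

import numpy as np
import tensorflow as tf
from PIL import Image

IMAGE_HW = 128
# Tensor names taken from the command above; adjust if your checkpoint differs.
INPUT_TENSOR = "clone_0/sources_ph:0"
OUTPUT_TENSOR = "clone_0/custom_generated_t_style_source:0"

def load_batch(paths):
    # Resize and stack images into one (N, H, W, 3) float batch in [0, 1] (assumed preprocessing).
    imgs = [np.asarray(Image.open(p).convert("RGB").resize((IMAGE_HW, IMAGE_HW)),
                       dtype=np.float32) / 255.0 for p in paths]
    return np.stack(imgs, axis=0)

with tf.Session() as sess:
    # Restoring the graph and weights is the slow part; it is paid only once here.
    saver = tf.train.import_meta_graph("/PATH/TO/CHECKPOINT.meta")
    saver.restore(sess, "/PATH/TO/CHECKPOINT")

    graph = tf.get_default_graph()
    inp = graph.get_tensor_by_name(INPUT_TENSOR)
    out = graph.get_tensor_by_name(OUTPUT_TENSOR)

    # All images go through a single sess.run call.
    batch = load_batch(["img_0.png", "img_1.png", "img_2.png"])
    outputs = sess.run(out, feed_dict={inp: batch})
    print(outputs.shape)

With this, the 5-10 second load is paid once per process instead of once per image.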
