Questioning Inference Speed #4

Closed
wdjose opened this issue Apr 10, 2021 · 7 comments

Comments

@wdjose

wdjose commented Apr 10, 2021

Good day,

First of all, congratulations on your work and paper. The idea of separating depth-dominant and color-dominant branches is interesting. Also, thank you for releasing the source code to the public. I have been replicating your code over the past few days, and so far inference has been straightforward (I am getting RMSE scores of around 760).

However, correct me if I'm wrong, but I think there might be a mistake in the inference time computation. In main.py, lines 213/216 are where the predictions are generated from the ENet/PENet models, after which gpu_time is computed. I tried adding a print(pred) call (see the image below).
[screenshot: main.py with print(pred) added after the forward pass]

I got very different inference times with and without the print(pred) call. I ran this on a machine with an RTX 2080 Ti, an i7-9700K, CUDA 11.2, torch==1.3.1, and torchvision==0.4.2. Below are my runtimes:

[screenshot: original code - a bit faster than your official runtime, presumably due to my newer CUDA version(?)]

[screenshot: modified code - much slower when print(pred) was added]

My understanding is that calling pred = model(batch_data) does not yet run the model prediction; inference only actually runs when result.evaluate() is called on line 268 (i.e. lazy execution):
[screenshot: main.py line 268, where result.evaluate() is called]

This results in a nearly 10x increase in measured inference time (151 ms vs. 17 ms). Can you confirm that this also happens in your environment?
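For reference, here is roughly the shape of the measurement I am describing (just a self-contained sketch: a stack of convolutions stands in for ENet/PENet, and the input shape is arbitrary):

```python
import time
import torch

# Stand-in for the real network and batch (not the actual main.py objects).
model = torch.nn.Sequential(
    *[torch.nn.Conv2d(32, 32, 3, padding=1) for _ in range(50)]
).cuda().eval()
batch_data = torch.randn(1, 32, 352, 1216, device="cuda")

with torch.no_grad():
    start = time.time()
    pred = model(batch_data)        # queues the CUDA kernels and returns almost immediately
    gpu_time = time.time() - start  # so this mostly measures kernel-launch overhead

    # Anything that needs the values on the CPU (print, .item(), .cpu())
    # has to wait for the queued kernels to finish first:
    t0 = time.time()
    print(pred.mean().item())       # blocks until the forward pass is really done
    print("gpu_time: %.1f ms, print(...): %.1f ms"
          % (gpu_time * 1e3, (time.time() - t0) * 1e3))
```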

@JUGGHM
Owner

JUGGHM commented Apr 11, 2021

Thanks for your interest! This is a very interesting finding, and it does happen in my environment. However, based on the following "experiments", my conclusion is that the surprising latency is owed to the data copy from GPU to CPU:

(1) Situations where the latency is above 100 ms:
(i) [screenshot]
(ii) [screenshot]
(iii) [screenshot]

(2) Situations where the latency is close to that reported in the paper:
[screenshot]

I hope these examples are helpful for your question.

By the way, I was surprised that you successfully trained ENet to performance comparable with my trained full model. Could you share your device configuration and training parameters? (Feel free to ignore this if you'd rather not.)

@wdjose
Author

wdjose commented Apr 11, 2021

Actually, for the screenshots above, I ran your pre-trained PENet, not ENet 😄 All other parameters are the same: python main.py -b 1 -n pe --evaluate pe.pth.tar. But I also tried it with just ENet, and the increased latency was present there as well.

Regarding the latency: I am not sure it comes from transferring data between CPU and GPU. The reason I am skeptical is that I also tried running this model on a Jetson Nano (vanilla, no modifications).

These were my results:

[screenshot: original code]

[screenshot: modified code - with print(pred) (or just str(pred))]

In addition, I was tripping the Jetson Nano's GPU watchdog timer on the same line 268 where the metrics are computed (which should be purely CPU operations), rather than during model inference.

I find it hard to believe that printing the prediction tensors on the Jetson Nano takes >10 s while model inference takes just 2 s. (The Jetson Nano uses unified memory for both CPU and GPU, so no data transfer is needed.) This is why my earlier conclusion was that model prediction in PyTorch is executed lazily and does not actually run until the prediction tensors are needed. I think the first three experiments you showed triggered model execution, while the fourth did not (but I am not sure).

I'm not sure. What do you think we can do to check whether model inference really completes within the pred = model(batch_data) call (without doing any GPU-to-CPU transfer, so that it is convincing)? In the meantime I'll try some more experiments to verify; one idea is sketched below.
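One thing I plan to try (again only a sketch, with model and batch_data being the same stand-ins as in my earlier sketch): CUDA events can time the forward pass entirely on the GPU, without copying any prediction tensor back to the CPU:

```python
import torch

# CUDA events measure GPU time directly; nothing from pred is copied back
# to the CPU, only the elapsed time between the two events is read.
start_evt = torch.cuda.Event(enable_timing=True)
end_evt = torch.cuda.Event(enable_timing=True)

with torch.no_grad():
    start_evt.record()
    pred = model(batch_data)
    end_evt.record()

end_evt.synchronize()  # wait until the recorded work has actually finished
print("forward pass: %.1f ms" % start_evt.elapsed_time(end_evt))
```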

@wdjose
Author

wdjose commented Apr 11, 2021

Okay, I seem to have found something that forces the model inference to complete eagerly: torch.cuda.synchronize()
(Similar model inference time measurement issues from: sacmehta/ESPNet#57 and wutianyiRosun/CGNet#2)

If you replace print(pred) with torch.cuda.synchronize(), the runtimes are the same. I think this is because CUDA kernels run asynchronously with respect to the PyTorch CPU thread, which is why the measured inference time was shorter than the time CUDA actually took to finish.
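So the measurement could be adjusted to something like this (again just a sketch, with model and batch_data as stand-ins for the objects in main.py):

```python
import time
import torch

# Synchronize before starting and before stopping the clock, so the
# wall-clock interval covers the kernels launched for this batch.
with torch.no_grad():
    torch.cuda.synchronize()
    start = time.time()
    pred = model(batch_data)
    torch.cuda.synchronize()        # wait for all queued kernels to finish
    gpu_time = time.time() - start  # now reflects the actual inference latency
print("gpu_time: %.1f ms" % (gpu_time * 1e3))
```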

@JUGGHM
Owner

JUGGHM commented Apr 11, 2021

This is a problem I wasn't aware of before, as I followed the implementation of https://github.com/fangchangma/self-supervised-depth-completion. I think your intuition is right, and I will look into it.

@wdjose
Author

wdjose commented Apr 11, 2021

Okay, thank you. Let us know how it goes. 😀 In any case, the model performance is still state-of-the-art. My current research direction is actually fast depth completion on the edge, which is why I took an interest in your paper. My next experiments will be to slim down your network and reduce its parameters so it runs faster 🙂

@JUGGHM
Owner

JUGGHM commented Apr 21, 2021

The corrected inference time is now reported on the project page. Thanks for pointing out this problem!

@wdjose
Author

wdjose commented Apr 21, 2021

Got it, thank you for updating!
