
[pytorch] Add system property to config GraphExecutorOptimize #2156

Merged
merged 9 commits into from
Nov 18, 2022

Conversation

KexinFeng
Contributor

@KexinFeng KexinFeng commented Nov 16, 2022

Add a system property to configure GraphExecutorOptimize. When enabled, the first inference requires an initial warm-up, which costs ~3.6 s of latency; with GraphExecutorOptimize=true, subsequent inference latencies are reduced.

This has a noticeable effect on ResNet-18 on GPU, while other models are not much affected.

This utilizes setGraphExecutorOptimize; relevant PR: #904.
Solves issues #2151 and #2153.

Update:
Tested in the GPU Docker image.
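A JVM system property like this is typically read once and parsed as a boolean before the engine calls the native setter. The sketch below illustrates that pattern; the property key `ai.djl.pytorch.graph_optimizer` and the helper names are assumptions for illustration — the merged diff defines the actual key and where DJL's PyTorch engine consumes it.

```java
public class GraphOptimizerConfig {

    // Hypothetical property key -- see the merged diff for the actual name.
    static final String PROP_KEY = "ai.djl.pytorch.graph_optimizer";

    /**
     * Reads the system property controlling GraphExecutorOptimize.
     * Defaults to true, matching PyTorch's default executor behavior.
     */
    static boolean isGraphOptimizerEnabled() {
        return Boolean.parseBoolean(System.getProperty(PROP_KEY, "true"));
    }

    public static void main(String[] args) {
        // Disable the optimizer to skip the first-inference warm-up,
        // trading steady-state latency for a faster first request.
        System.setProperty(PROP_KEY, "false");
        System.out.println("GraphExecutorOptimize: " + isGraphOptimizerEnabled());
    }
}
```

In the engine itself, the parsed value would then be passed to the JNI-level `setGraphExecutorOptimize` before running inference.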

@KexinFeng KexinFeng marked this pull request as ready for review November 16, 2022 16:16
@codecov-commenter

codecov-commenter commented Nov 16, 2022

Codecov Report

Base: 72.08% // Head: 71.52% // Decreases project coverage by 0.56% ⚠️

Coverage data is based on head (3cb8a0e) compared to base (bb5073f).
Patch coverage: 72.07% of modified lines in pull request are covered.

Additional details and impacted files
@@             Coverage Diff              @@
##             master    #2156      +/-   ##
============================================
- Coverage     72.08%   71.52%   -0.57%     
- Complexity     5126     6339    +1213     
============================================
  Files           473      626     +153     
  Lines         21970    27972    +6002     
  Branches       2351     3017     +666     
============================================
+ Hits          15838    20006    +4168     
- Misses         4925     6492    +1567     
- Partials       1207     1474     +267     
Impacted Files Coverage Δ
api/src/main/java/ai/djl/modality/cv/Image.java 69.23% <ø> (-4.11%) ⬇️
...rc/main/java/ai/djl/modality/cv/MultiBoxPrior.java 76.00% <ø> (ø)
...rc/main/java/ai/djl/modality/cv/output/Joints.java 71.42% <ø> (ø)
.../main/java/ai/djl/modality/cv/output/Landmark.java 100.00% <ø> (ø)
...main/java/ai/djl/modality/cv/output/Rectangle.java 72.41% <0.00%> (ø)
...i/djl/modality/cv/translator/BigGANTranslator.java 21.42% <0.00%> (-5.24%) ⬇️
.../modality/cv/translator/ImageFeatureExtractor.java 0.00% <0.00%> (ø)
.../ai/djl/modality/cv/translator/YoloTranslator.java 27.77% <0.00%> (+18.95%) ⬆️
...modality/cv/translator/wrapper/FileTranslator.java 44.44% <ø> (ø)
...y/cv/translator/wrapper/InputStreamTranslator.java 44.44% <ø> (ø)
... and 560 more



@KexinFeng KexinFeng changed the title Add flag of keepGraphOptimize Use setGraphExecutorOptimize to control the warm up Nov 17, 2022
@frankfliu frankfliu changed the title Use setGraphExecutorOptimize to control the warm up [pytorch] Add system property to config GraphExecutorOptimize Nov 18, 2022
@frankfliu frankfliu merged commit 28209d6 into deepjavalibrary:master Nov 18, 2022
@KexinFeng KexinFeng deleted the inference_gpu branch March 15, 2023 08:25