
trt_weight_stripped_engine_enable does not work for all networks/size ranges. #22165

Open
BengtGustafsson opened this issue Sep 20, 2024 · 2 comments
Labels
ep:TensorRT issues related to TensorRT execution provider

Comments

@BengtGustafsson
Contributor

Describe the issue

Our test program runs our different AI networks (7 at the moment), each over different size ranges. We include the size range in the hash that is part of the cache directory name. We need separate directories for each optimization because we load the model from an ONNX blob, so all the generated files get the same name (at least with embedded engine enabled). The test program creates 27 different cache directories.

On the second run of the test program we expected all caches to be reused. This is what happens when trt_weight_stripped_engine_enable is off, but with it enabled, 5 of the 27 cache directories are not reused: optimization is redone and the files in those cache directories are updated, yet the same re-optimization happens again on every succeeding run.

Since we use an ONNX blob, we set the same data in the trt_onnx_bytestream and trt_onnx_bytestream_size members on succeeding runs, hoping that the weights will be picked up from there.
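For context, the setup described above would look roughly like the following sketch. It is not the reporter's actual code: the CreateSession helper, the include path, and the zero-initialized defaults are assumptions; the option fields themselves are public OrtTensorRTProviderOptionsV2 members.

```cpp
// Minimal sketch (assumed, not the reporter's code): TensorRT EP with engine
// caching, weight-stripped engines, and the model supplied as an ONNX blob.
#include <onnxruntime_cxx_api.h>
#include <core/providers/tensorrt/tensorrt_provider_options.h>  // path may differ in a source build

#include <string>
#include <vector>

Ort::Session CreateSession(Ort::Env& env,
                           const std::vector<char>& onnx_blob,  // in-memory model
                           const std::string& cache_dir) {      // one dir per network/size range
  OrtTensorRTProviderOptionsV2 trt_options{};  // zero-initialized; real code would set all relevant fields
  trt_options.trt_engine_cache_enable = 1;
  trt_options.trt_engine_cache_path = cache_dir.c_str();
  trt_options.trt_weight_stripped_engine_enable = 1;   // the option this issue is about
  trt_options.trt_onnx_bytestream = onnx_blob.data();  // weights source on succeeding runs
  trt_options.trt_onnx_bytestream_size = onnx_blob.size();

  Ort::SessionOptions session_options;
  session_options.AppendExecutionProvider_TensorRT_V2(trt_options);

  // The session is created from the same in-memory blob, not from a file path.
  return Ort::Session(env, onnx_blob.data(), onnx_blob.size(), session_options);
}
```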

To reproduce

Set trt_weight_stripped_engine_enable and create sessions for multiple models, each with its own cache directory. Rerun and check that setup time is not > 1 s per session, as in the timing sketch below. The problem could be related to supplying the ONNX data as a blob, but that seems rather unlikely since caching works for most of the models.
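A hypothetical timing check for the rerun step; the 1 s threshold comes from the report above, and CreateSession is the helper sketched in the previous comment.

```cpp
// Hypothetical warm-start check: with a reusable cache, session creation
// should stay well under the 1 s threshold mentioned above.
#include <chrono>
#include <iostream>

void CheckWarmStart(Ort::Env& env, const std::vector<char>& onnx_blob,
                    const std::string& cache_dir) {
  auto t0 = std::chrono::steady_clock::now();
  Ort::Session session = CreateSession(env, onnx_blob, cache_dir);
  double seconds =
      std::chrono::duration<double>(std::chrono::steady_clock::now() - t0).count();
  std::cout << cache_dir << ": setup took " << seconds << " s\n";
  if (seconds > 1.0)
    std::cout << "  -> cache likely not reused; engine was re-optimized\n";
}
```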

Urgency

No response

Platform

Windows

OS Version

Windows 11

ONNX Runtime Installation

Built from Source

ONNX Runtime Version or Commit ID

1.19.2

ONNX Runtime API

C++

Architecture

X64

Execution Provider

TensorRT

Execution Provider Library Version

TensorRT 10.4.0.26 on CUDA 11.6


github-actions bot commented Nov 5, 2024

This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.

github-actions bot added the stale label on Nov 5, 2024
@BengtGustafsson
Contributor Author

This system of making issues stale because you didn't react to them is ridiculous!

github-actions bot removed the stale label on Nov 8, 2024