No importer registered for op: NonZero #401
Same problem here when trying to run inference using:
sudo docker run --gpus '"device=0"' --rm -p8000:8000 --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 --net inference_network --network-alias=trt_server -v/home/fperez/dev/models/tensorrt:/models nvcr.io/nvidia/tensorrt:20.03-py3 trtexec --onnx=/models/bert-onnx/test/model.onnx --device=0 --verbose
Error in the ONNX graph: No importer registered for op: NonZero
Are there plans to support this op?
+1, the same problem. Need NonZero support.
Really need this operation! Any progress on this issue?
Bump. For now, no model from the TF Object Detection API can be loaded because of this node.
Really need this operation!
This is a problem for me as well. This occurs in TensorFlow when using tf.where with a single argument. Here is a reproducible example for building an ONNX model:
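A minimal sketch of such a repro, assuming tf2onnx (the function and file names here are illustrative, not from the original comment):

```python
# Hypothetical repro: single-argument tf.where returns the indices of the
# true elements, which tf2onnx exports as an ONNX NonZero node.
import tensorflow as tf
import tf2onnx

spec = [tf.TensorSpec([None], tf.float32)]

@tf.function(input_signature=spec)
def f(x):
    return tf.where(x > 0)  # one-argument form: data-dependent output shape

# Writes an ONNX model containing NonZero; importing it into TensorRT < 8.5
# fails with "No importer registered for op: NonZero".
tf2onnx.convert.from_function(f, input_signature=spec, opset=11,
                              output_path="model.onnx")
```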
The error can then be found when parsing the resulting model with TensorRT.
Really need this operation, too!
Same issue. I wonder, are there any practical reasons why this op should not be included in TensorRT?
Same issue. My ONNX model was generated under the environment config below; any update would be appreciated.
We currently do not support the NonZero operator, which is why you are seeing this error. We have plans to support this in a future release.
Hi @kevinch-nv, I need this operator as well, but since TensorRT needs fixed-size ops, how will you do it?
@kevinch-nv Is there any update on this?
What about x > 0 or x < 0?
Is there an alternative way to achieve the same result as this op?
Stuck with the same problem...
Hi everyone, I am facing the same problem with my model. I am in the situation described by @gabrielibagon: the tf.where operators are converted to NonZero operators by ONNX. Have you found any ideas to work around the issue? On my side, I have tried something with the ONNX API in my Python code. I iterate over the nodes of the graph, and if a node is of the NonZero type I change it to the Where operator, which is supported by TensorRT:
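A minimal sketch of that in-place retype, assuming the onnx Python API (file names illustrative). Note that Where expects three inputs (condition, X, Y) while NonZero takes one, which is consistent with the error reported below:

```python
# Naively retype every NonZero node to Where, leaving its inputs unchanged.
# This does NOT preserve semantics: Where needs (condition, X, Y) inputs,
# so the patched model is expected to fail later checks.
import onnx

model = onnx.load("model.onnx")
for node in model.graph.node:
    if node.op_type == "NonZero":
        node.op_type = "Where"
onnx.save(model, "model_patched.onnx")
```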
Then when I try to re-export to TensorRT, I get a new error. I assume the format of the input is a problem for the Where operator. This is only one possibility to explore further: maybe it is possible to add an intermediate operator like "Equal" to obtain a valid input for the condition field. Another way is to develop a plugin, but in the examples I have seen on the web, the IPluginCreator object is developed in C++ and the Python API is then used to create the plugin from it. @kevinch-nv Do you know if it is possible to do everything with the Python API?
If you have any other suggestions, feel free to answer and share what you have tried. Thanks in advance for your help.
+1 Same problem. Is there any workaround?
+1 Waiting for a workaround... Or is changing the network architecture the only option?
+1 Looking forward to this feature.
I believe that trying to get TensorRT to use a plugin that implements NonZero will run into the same problem: the output shape depends on the input values.
Are there any solutions to this?
Any updates on this?
Can this be worked around?
Need this as well.
A workaround that worked for me was to change the source code of the model I was exporting so that the op producing NonZero was avoided.
I need this too. Trying to make a workaround for TensorRT to accept uint8 input.
I need this too...
+1, need this operation too.
@kevinch-nv Could you please release support for the NonZero operation soon?
+1, would be really nice to have! :)
It is already on the roadmap and a work in progress. It is a challenging feature for TRT because the output data shape depends on the input values, which violates TRT's assumption that output shapes are known before inference starts. However, we are trying our best to break through the difficulty and get this feature shipped. Please stay tuned :) For now, I would suggest that you look at the model to figure out whether NonZero is really needed. In many cases, NonZero is just a by-product of ops that could have been implemented with other, simpler ops, and it can be worked around with ONNX GraphSurgeon by replacing those ops.
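As an illustration of that kind of replacement, a hedged onnx-graphsurgeon sketch. It assumes a hypothetical pattern where a boolean mask feeds NonZero and then a GatherND, and where the downstream graph can tolerate a fixed-shape, zero-filled result instead of a compacted one; verify this holds for your model:

```python
import numpy as np
import onnx
import onnx_graphsurgeon as gs

graph = gs.import_onnx(onnx.load("model.onnx"))

for nz in [n for n in graph.nodes if n.op == "NonZero"]:
    mask = nz.inputs[0]      # boolean condition tensor feeding NonZero (assumption)
    gather = nz.o()          # consumer of the indices, e.g. a GatherND (assumption)
    data = gather.inputs[0]  # tensor being indexed (assumption)
    fill = gs.Constant(f"{nz.name}_zero", np.zeros((1,), dtype=np.float32))
    out = gs.Variable(f"{nz.name}_where_out", dtype=np.float32, shape=data.shape)
    # Elementwise Where keeps the full, data-independent shape.
    graph.nodes.append(gs.Node("Where", inputs=[mask, data, fill], outputs=[out]))
    # Rewire consumers of the gather output to the fixed-shape Where output.
    for consumer in list(gather.outputs[0].outputs):
        consumer.inputs = [out if t is gather.outputs[0] else t
                           for t in consumer.inputs]
    gather.outputs.clear()   # detach the old, data-dependent subgraph
    nz.outputs.clear()

graph.cleanup().toposort()
onnx.save(gs.export_onnx(graph), "model_fixed.onnx")
```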
+1, this issue has been open for a long time.
Very good, but can you give an example of how to do it?
Can you give a clear explanation of why the three-parameter form of "where" won't cause ONNX to produce NonZero? Thanks.
Because the shape is fixed: the three-parameter form where(condition, x, y) selects elementwise, so its output has the same data-independent shape as its inputs, while the one-parameter form returns the indices of the true elements, whose count depends on the data.
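A small numpy illustration of that difference (numpy's where has the same two forms):

```python
import numpy as np

x = np.array([[1.0, 0.0], [0.0, 2.0]])

# Three-argument form: elementwise select; the shape is known before the
# data is seen. This is what exports to ONNX Where.
print(np.where(x > 0, x, -1.0).shape)  # (2, 2), regardless of the values

# One-argument form: indices of the true elements; how many there are
# depends on the values. This is what exports to ONNX NonZero.
print(np.where(x > 0))  # two index arrays, each of length 2 here
```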
Can someone please share an ONNX GraphSurgeon code example that replaces an ONNX NonZero layer with a combination of other layers providing the same logic?
NonZero is natively supported now for versions >= 8.5. I recommend using the latest 10.X version of TensorRT and importing your model with that version. If there are still issues with NonZero ops in later versions, please open a new issue.
Thanks @kevinch-nv. Unfortunately, I am limited to the old TRT version 8.2 on my TX2 Jetson. NonZero is not supported by this old TRT version, so I must manipulate the original ONNX. If there is no "magic" plugin, I will have to divide the model into several parts by extracting the NonZero outside and implementing it externally. So I just want to know if anyone already has any kind of solution they can share. Thanks,
NonZero is a tricky op since the output shape is dependent on the values in the input data. Prior to TensorRT 8.5, there is no native or plugin-based way to have TensorRT correctly understand and allocate these output shapes. If you must use TensorRT 8.2, then NonZero must be done externally to TensorRT.
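A hedged sketch of that split, assuming onnx.utils.extract_model and illustrative tensor names ("pre_out" is the tensor feeding NonZero, "nz_out" its output):

```python
# Split the model around NonZero: run part 1 in TensorRT, compute NonZero
# on the host, then feed the indices into part 2 (also in TensorRT).
import numpy as np
import onnx.utils

# Part 1: graph inputs -> the tensor feeding NonZero.
onnx.utils.extract_model("model.onnx", "part1.onnx",
                         input_names=["input"], output_names=["pre_out"])

# Part 2: the NonZero output -> the graph outputs.
onnx.utils.extract_model("model.onnx", "part2.onnx",
                         input_names=["nz_out"], output_names=["output"])

def nonzero_host(x: np.ndarray) -> np.ndarray:
    # ONNX NonZero semantics: int64 indices with shape (rank, num_nonzero).
    return np.stack(np.nonzero(x)).astype(np.int64)
```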
When I try to import the model from an ONNX file, I get: No importer registered for op: NonZero
How can NonZero be replaced or worked around?