Gather in Upsample problem #192
Comments
Please help!!!
I have the same issue; please suggest a solution.
Any idea how to solve it?
I replaced Upsample with ConvTranspose2d, but I don't like such a workaround.
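For reference, that swap can be sketched as follows (the channel count and input size are hypothetical): a ConvTranspose2d with kernel_size=2 and stride=2 doubles the spatial dimensions just like Upsample(scale_factor=2), but exports as a single ONNX ConvTranspose node with no Shape/Gather subgraph.

```python
import torch
import torch.nn as nn

# Hypothetical decoder step: both layers double the spatial dims,
# but ConvTranspose2d avoids the Shape -> Gather ops that the
# scale_factor-based Upsample export emits.
up = nn.Upsample(scale_factor=2, mode='nearest')
deconv = nn.ConvTranspose2d(64, 64, kernel_size=2, stride=2)

x = torch.randn(1, 64, 16, 16)
assert up(x).shape == deconv(x).shape == (1, 64, 32, 32)
```

Note that, unlike Upsample, ConvTranspose2d has learned weights, so the two are not numerically equivalent until the deconvolution is trained.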
I solved this problem by exporting only the backbone, without the FPN, to ONNX.
@tibistrat you are absolutely a magician! Thanks a lot!
@tibistrat Thanks for your explanation. But actually, with this code:
it still cannot be converted by onnx2trt; it throws such an error:
I have tested with similar code:
Once I comment out this line, it works; uncomment it, and it raises the error. I still don't know why it happens or how to solve it. Any suggestions?
@jinfagang Have you solved it?
No, I don't know the exact reason.
@jinfagang try
@jinfagang You know your output's exact size, right? You have to enter these exact values, for example:
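A minimal sketch of that suggestion, with a hypothetical 32×32 input upsampled to a fixed 64×64: passing an explicit size instead of scale_factor lets the exporter emit constant output dimensions rather than the runtime Shape -> Gather -> Mul subgraph that this parser rejects.

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 32, 32)

# scale_factor makes the traced export compute the output dims at runtime
# (Shape -> Gather -> Mul); hard-coding the exact target size exports
# them as plain constants instead.
y = F.interpolate(x, size=(64, 64), mode='nearest')
assert y.shape == (1, 3, 64, 64)
```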
@jinfagang Have you solved it? Thanks.
@aidonchuk Have you solved it as @tibistrat said? Why do I still get errors? Please help me, thanks.
It works perfectly, thank you.
@tibistrat Thanks a lot!
You may use onnx-simplifier to do the same.
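For example (file names hypothetical), onnx-simplifier can fold the constant Shape/Gather/Mul subgraph away before the model is handed to onnx2trt:

```shell
# Install onnx-simplifier and run it on the exported model; it constant-folds
# the Shape -> Gather -> Mul subgraph so the parser sees static resize dims.
pip install onnx-simplifier
python -m onnxsim model.onnx model_sim.onnx
```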
Closing since this has been resolved.
Hi! I can't export my model from ONNX to TensorRT.
`----------------------------------------------------------------
Input filename: model.onnx
ONNX IR version: 0.0.4
Opset version: 9
Producer name: pytorch
Producer version: 1.1
Domain:
Model version: 0
Doc string:
WARNING: ONNX model has a newer ir_version (0.0.4) than this parser was built against (0.0.3).
%206 : Long() = onnx::Constant[value={2}](), scope: ResNet18_OneConvDecoder/DecoderBlock[center]/Sequential[block]/Upsample[0]
Parsing model
WARNING: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
Successfully casted down to INT32.
While parsing node number 69 [Gather -> "208"]:
ERROR: /home/alex/tools/onnx-tensorrt/onnx2trt_utils.hpp:335 In function convert_axis:
[8] Assertion failed: axis >= 0 && axis < nbDims
%207 : Tensor = onnx::Shape(%205), scope: ResNet18_OneConvDecoder/DecoderBlock[center]/Sequential[block]/Upsample[0]
%208 : Long() = onnx::Gather[axis=0](%207, %206), scope: ResNet18_OneConvDecoder/DecoderBlock[center]/Sequential[block]/Upsample[0]
%209 : Tensor = onnx::Constant[value={2}]()
%210 : Tensor = onnx::Mul(%208, %209)`