
Error when converting tensorrt model #444

Open
Yhc-777 opened this issue Sep 20, 2024 · 0 comments
Yhc-777 commented Sep 20, 2024

(yolov10) u@u:/mnt/2/haochen/yolov10$ yolo export model=/mnt/2/haochen/yolov10/runs/detect/train_v10/weights/best.pt format=engine half=False simplify opset=13 workspace=16
WARNING ⚠️ TensorRT requires GPU export, automatically assigning device=0
Ultralytics YOLOv8.1.34 🚀 Python-3.9.19 torch-2.0.1+cu117 CUDA:0 (NVIDIA GeForce RTX 4080, 16071MiB)
YOLOv10x summary (fused): 503 layers, 31595636 parameters, 0 gradients, 169.8 GFLOPs

PyTorch: starting from '/mnt/2/haochen/yolov10/runs/detect/train_v10/weights/best.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 300, 6) (61.1 MB)

ONNX: starting export with onnx 1.14.0 opset 13...
============= Diagnostic Run torch.onnx.export version 2.0.1+cu117 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================

ONNX: simplifying with onnxslim 0.1.31...
ONNX: export success ✅ 4.9s, saved as '/mnt/2/haochen/yolov10/runs/detect/train_v10/weights/best.onnx' (112.5 MB)

TensorRT: starting export with TensorRT 8.6.1...
[09/20/2024-10:59:05] [TRT] [I] [MemUsageChange] Init CUDA: CPU +564, GPU +0, now: CPU 3520, GPU 1140 (MiB)
[09/20/2024-10:59:05] [TRT] [I] [MemUsageChange] Init builder kernel library: CPU +433, GPU +104, now: CPU 3972, GPU 1244 (MiB)
[09/20/2024-10:59:05] [TRT] [I] ----------------------------------------------------------------
[09/20/2024-10:59:05] [TRT] [I] Input filename: /mnt/2/haochen/yolov10/runs/detect/train_v10/weights/best.onnx
[09/20/2024-10:59:05] [TRT] [I] ONNX IR version: 0.0.9
[09/20/2024-10:59:05] [TRT] [I] Opset version: 13
[09/20/2024-10:59:05] [TRT] [I] Producer name: pytorch
[09/20/2024-10:59:05] [TRT] [I] Producer version: 2.0.1
[09/20/2024-10:59:05] [TRT] [I] Domain:
[09/20/2024-10:59:05] [TRT] [I] Model version: 0
[09/20/2024-10:59:05] [TRT] [I] Doc string:
[09/20/2024-10:59:05] [TRT] [I] ----------------------------------------------------------------
[09/20/2024-10:59:05] [TRT] [W] onnx2trt_utils.cpp:369: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[09/20/2024-10:59:05] [TRT] [I] No importer registered for op: Mod. Attempting to import as plugin.
[09/20/2024-10:59:05] [TRT] [I] Searching for plugin: Mod, plugin_version: 1, plugin_namespace:
[09/20/2024-10:59:05] [TRT] [E] ModelImporter.cpp:773: While parsing node number 592 [Mod -> "/model.23/Mod_output_0"]:
[09/20/2024-10:59:05] [TRT] [E] ModelImporter.cpp:774: --- Begin node ---
[09/20/2024-10:59:05] [TRT] [E] ModelImporter.cpp:775: input: "/model.23/TopK_1_output_1"
input: "/model.23/Constant_13_output_0"
output: "/model.23/Mod_output_0"
name: "/model.23/Mod"
op_type: "Mod"
attribute {
name: "fmod"
i: 0
type: INT
}

[09/20/2024-10:59:05] [TRT] [E] ModelImporter.cpp:776: --- End node ---
[09/20/2024-10:59:05] [TRT] [E] ModelImporter.cpp:779: ERROR: builtin_op_importers.cpp:4890 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
TensorRT: export failure ❌ 6.0s: failed to load ONNX file: /mnt/2/haochen/yolov10/runs/detect/train_v10/weights/best.onnx
Traceback (most recent call last):
File "/home/u/miniconda3/envs/yolov10/bin/yolo", line 8, in <module>
sys.exit(entrypoint())
File "/mnt/2/haochen/yolov10/ultralytics/cfg/__init__.py", line 594, in entrypoint
getattr(model, mode)(**overrides) # default args from model
File "/mnt/2/haochen/yolov10/ultralytics/engine/model.py", line 590, in export
return Exporter(overrides=args, _callbacks=self.callbacks)(model=self.model)
File "/home/u/miniconda3/envs/yolov10/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/mnt/2/haochen/yolov10/ultralytics/engine/exporter.py", line 288, in __call__
f[1], _ = self.export_engine()
File "/mnt/2/haochen/yolov10/ultralytics/engine/exporter.py", line 138, in outer_func
raise e
File "/mnt/2/haochen/yolov10/ultralytics/engine/exporter.py", line 133, in outer_func
f, model = inner_func(*args, **kwargs)
File "/mnt/2/haochen/yolov10/ultralytics/engine/exporter.py", line 689, in export_engine
raise RuntimeError(f"failed to load ONNX file: {f_onnx}")
RuntimeError: failed to load ONNX file: /mnt/2/haochen/yolov10/runs/detect/train_v10/weights/best.onnx

How can I solve this problem? Thx!!!
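For context on the error above: the TensorRT 8.6 ONNX parser reports "No importer registered for op: Mod" and then fails the plugin fallback, so the export aborts at the `/model.23/Mod` node (which takes TopK indices modulo a constant, with `fmod=0`). A workaround often applied in this situation is to rewrite the `Mod` node in the ONNX graph using ops TensorRT does support, via the identity `a mod b == a - (a // b) * b`, which holds exactly for the non-negative integers involved here. The snippet below is only a sketch of that identity in plain Python (the actual graph rewrite would be done with an ONNX graph-editing tool, e.g. onnx-graphsurgeon, replacing `Mod` with `Div`/`Mul`/`Sub` nodes); the function name is illustrative, not part of any library.

```python
# Hypothetical illustration of the Mod-decomposition workaround:
# TensorRT 8.6's ONNX parser has no importer for the "Mod" op, but it does
# support floor-division, multiply, and subtract. For non-negative integer
# operands (which matches fmod=0 on TopK indices), integer modulo can be
# expressed as:  a mod b == a - (a // b) * b

def mod_via_sub_div_mul(a: int, b: int) -> int:
    """Integer modulo expressed with ops TensorRT can import."""
    return a - (a // b) * b

# Sanity-check the identity against Python's built-in modulo.
for a, b in [(7, 3), (300, 80), (0, 5)]:
    assert mod_via_sub_div_mul(a, b) == a % b
```

If the graph is rewritten this way (or exported from a code path that avoids emitting `Mod`), the parser no longer hits the missing-importer branch, so the plugin lookup that produced "Plugin not found" is never attempted.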
