tensorrt_yolo
sample yolov5 model throws an error on inference
#1647
Labels
component:perception, type:bug, type:documentation
Description
The tensorrt_yolo package links to a couple of YOLOv5 ONNX models. I converted the yolov5l model to `.engine` and ran it, but the node immediately throws an error. `compute-sanitizer` reports that the illegal memory access comes from `enqueueV2` on line 304. A quick search suggests this is a known, old issue with YOLOv5 and AutoShape. The issue goes away when I download the pre-trained models from PyTorch Hub and convert them to TensorRT using the scripts provided by the ultralytics repo.
Expected behavior
CUDA should not throw an error.
Actual behavior
CUDA throws an illegal memory access error.
Steps to reproduce
tensorrt_yolo
to TensorRTtrtexec --onnx=yolov5l.onnx --saveEngine=yolov5l.engine
tensorrt_yolo
nodeVersions
Possible causes
I'm not a pro in TensorRT, but here are a couple of potential causes:

- The `.engine` and the `.onnx` may need to be generated at the same time, using the method described in AutoShape Usage ultralytics/yolov5#7128.

It would be great if someone could explain where the linked model comes from, and update it if necessary. Also, I'm not sure whether the way I'm running inference on the linked model is correct; it would be great if more documentation on the model conversions could be linked.
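As a back-of-the-envelope check of the mismatch hypothesis (this sketch is mine, not from the report): a YOLOv5 export with the fused Detect head emits a single `[1, 25200, 85]` tensor at 640x640 input, whereas the raw per-head outputs are three larger-grid tensors. If the node allocates output buffers for one layout but the engine was built from a model exported with the other, `enqueueV2` can write past the allocation, which would match the illegal memory access. The constants below are standard YOLOv5/COCO values; whether `tensorrt_yolo` allocates per-head or fused buffers is an assumption.

```python
# Sketch: element counts for YOLOv5 output layouts at 640x640 (COCO, 80 classes).
INPUT = 640
STRIDES = [8, 16, 32]   # strides of the three YOLOv5 detection heads
ANCHORS = 3             # anchor boxes per grid cell
CHANNELS = 85           # 4 box coords + 1 objectness + 80 class scores

# Elements per raw detection head: (grid side)^2 * anchors * channels
per_head = [(INPUT // s) ** 2 * ANCHORS * CHANNELS for s in STRIDES]
# The fused Detect head concatenates all cells into one [1, 25200, 85] tensor
fused = sum(per_head)

print(per_head)  # [1632000, 408000, 102000]
print(fused)     # 2142000 == 25200 * 85
```

If the buffer sizes the node computes from the engine bindings don't equal these counts, that mismatch would be the first thing to rule out.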
Additional context
No response