Releases · laugh12321/TensorRT-YOLO
TensorRT YOLO v4.0 - Release Notes
Breaking Changes
- Add Dockerfile for building the project's Docker image (73060db)
- Add support for YOLOv3 and Ultralytics model export (8f2af94)
- Add support for YOLOv10 (62071cb)
- Refactor deploy library (8a4df33)
- Use CUDA Graph to accelerate static model inference (3576c78); see the sketch after this list
- Add a BaseDet base class and refactor DeployDet and DeployCGDet (416e77b)
- Add streaming video support using VideoPipe (#17, #19)
- Major update with pybind11 integration and the new 4.0 tag (3244eea)
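The CUDA Graph item above (3576c78) captures the static-shape inference launch sequence once and then replays it, cutting per-frame CPU launch overhead. Below is a minimal sketch of that capture/replay pattern, assuming CUDA 11.4+ for cudaGraphInstantiateWithFlags; the dummy kernel merely stands in for the preprocessing, TensorRT enqueue, and postprocessing work the deploy library would actually capture, so this is not the repository's implementation.
```cpp
#include <cuda_runtime.h>
#include <cstdio>

// Stand-in for the work that would be captured in practice (preprocess kernels,
// context->enqueueV3(stream), postprocess kernels). Graph replay reuses the exact
// launch parameters recorded at capture time, hence the "static model" restriction.
__global__ void dummyInference(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * 0.5f + 1.0f;
}

int main() {
    const int n = 1 << 20;
    float* d_data = nullptr;
    cudaMalloc(&d_data, n * sizeof(float));
    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // 1) Capture: record the launch sequence into a graph instead of executing it.
    cudaGraph_t graph;
    cudaStreamBeginCapture(stream, cudaStreamCaptureModeGlobal);
    dummyInference<<<(n + 255) / 256, 256, 0, stream>>>(d_data, n);
    cudaStreamEndCapture(stream, &graph);

    // 2) Instantiate once, then replay with a single launch call per frame.
    cudaGraphExec_t graphExec;
    cudaGraphInstantiateWithFlags(&graphExec, graph, 0);  // CUDA 11.4+ signature
    for (int frame = 0; frame < 100; ++frame) {
        cudaGraphLaunch(graphExec, stream);
    }
    cudaStreamSynchronize(stream);

    cudaGraphExecDestroy(graphExec);
    cudaGraphDestroy(graph);
    cudaStreamDestroy(stream);
    cudaFree(d_data);
    std::printf("graph replayed\n");
    return 0;
}
```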
Bug Fixes
- Fix incorrect time interval calculation (a3dee2a)
- Fix: Include <cstring> to resolve "memcpy is not a member of std" error on Linux (1b763d9)
- Update detection output variable names for clarity (fe67b01)
- Add <cstdint> to resolve a Linux compilation issue (f602dc4)
- Fix: Graph input and output tensors must include dtype information (2cacb7d)
Full Changelog: v3.0...v4.0
TensorRT YOLO v3.0 - Release Notes
Breaking Changes
- Add TensorRT INT8 PTQ support (87f67ff); see the calibrator sketch after this list
- Add C++ inference implementation (0f3069f)
- Implement parallel preprocessing with multiple streams (86d6175)
- Refactor C++ inference code to support dynamic and static libraries (425a1a4)
- Refactor the TensorRT-YOLO Python code and package it as tensorrt_yolo (a10ebc8)
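The INT8 PTQ item above (87f67ff) goes through TensorRT's post-training quantization path, where a calibrator hands preprocessed batches to the builder so it can collect activation statistics. The skeleton below sketches the standard nvinfer1::IInt8EntropyCalibrator2 interface such support is normally built on; the class name, the single-input assumption, and the stubbed batch loading are illustrative rather than the repository's actual code.
```cpp
#include <NvInfer.h>
#include <cuda_runtime.h>
#include <vector>

// Illustrative entropy calibrator: during an INT8 build TensorRT calls getBatch()
// repeatedly and reads activation statistics from the device buffers it returns.
// A real implementation would decode and preprocess calibration images here.
class SketchCalibrator : public nvinfer1::IInt8EntropyCalibrator2 {
public:
    SketchCalibrator(int batchSize, int channels, int height, int width, int numBatches)
        : mBatchSize(batchSize), mNumBatches(numBatches),
          mInputCount(static_cast<size_t>(batchSize) * channels * height * width) {
        cudaMalloc(&mDeviceInput, mInputCount * sizeof(float));
    }
    ~SketchCalibrator() { cudaFree(mDeviceInput); }

    int getBatchSize() const noexcept override { return mBatchSize; }

    bool getBatch(void* bindings[], const char* names[], int nbBindings) noexcept override {
        if (mBatchIndex >= mNumBatches) return false;  // calibration data exhausted
        // Stub: upload the next preprocessed NCHW float batch to the device buffer.
        std::vector<float> host(mInputCount, 0.5f);
        cudaMemcpy(mDeviceInput, host.data(), mInputCount * sizeof(float), cudaMemcpyHostToDevice);
        bindings[0] = mDeviceInput;  // assumes a single network input
        ++mBatchIndex;
        return true;
    }

    // Optional cache so repeated builds can skip recalibration.
    const void* readCalibrationCache(size_t& length) noexcept override {
        length = mCache.size();
        return mCache.empty() ? nullptr : mCache.data();
    }
    void writeCalibrationCache(const void* cache, size_t length) noexcept override {
        const char* p = static_cast<const char*>(cache);
        mCache.assign(p, p + length);
    }

private:
    int mBatchSize, mNumBatches, mBatchIndex{0};
    size_t mInputCount;
    void* mDeviceInput{nullptr};
    std::vector<char> mCache;
};
```
The calibrator is then passed to the builder through IBuilderConfig::setInt8Calibrator() alongside the BuilderFlag::kINT8 flag.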
Bug Fixes
- Fix batch visualize bug (9125219)
- Remove deleted move constructor and move assignment operator (e287342)
- Fix duplicate imports (1237e21)
- Fix bug (24ea950)
Full Changelog: v2.0...v3.0
TensorRT YOLO v2.0 - Release Notes
Breaking Changes
- Implement YOLOv9 Export to ONNX and TensorRT with EfficientNMS Plugin (249bfab)
- Remove FLOAT16 ONNX export and add support for Dynamic Shape export (9ec1f29)
- Enable dynamic shape inference with CUDA Python and TensorRT 8.6.1 (3286450); see the sketch after this list
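The dynamic shape item above (3286450) is wired up through the CUDA Python bindings in this release; since the code samples added to these notes are kept in CUDA C++, the sketch below shows the equivalent C++ calls: deserialize an engine built with a dynamic optimization profile, choose a concrete input shape per request, bind memory, and enqueue. The engine filename, the "images" tensor name, and the 640x640 resolution are assumptions.
```cpp
#include <NvInfer.h>
#include <cuda_runtime.h>
#include <cstdio>
#include <fstream>
#include <iterator>
#include <memory>
#include <vector>

// Prints only warnings and errors from TensorRT.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::printf("%s\n", msg);
    }
};

int main() {
    Logger logger;
    // Load a serialized engine built with a dynamic profile on its input,
    // e.g. min 1x3x640x640 / opt 4x3x640x640 / max 8x3x640x640 (assumed file name).
    std::ifstream file("yolo_dynamic.engine", std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)), std::istreambuf_iterator<char>());

    auto runtime = std::unique_ptr<nvinfer1::IRuntime>(nvinfer1::createInferRuntime(logger));
    auto engine  = std::unique_ptr<nvinfer1::ICudaEngine>(
        runtime->deserializeCudaEngine(blob.data(), blob.size()));
    if (!engine) return 1;
    auto context = std::unique_ptr<nvinfer1::IExecutionContext>(engine->createExecutionContext());

    // Dynamic shapes: pick the batch size per request, within the profile's range.
    const int batch = 4;
    context->setInputShape("images", nvinfer1::Dims4{batch, 3, 640, 640});

    // Allocate and bind the input for the resolved shape (TensorRT 8.5+ tensor API).
    void* dInput = nullptr;
    cudaMalloc(&dInput, static_cast<size_t>(batch) * 3 * 640 * 640 * sizeof(float));
    context->setTensorAddress("images", dInput);
    // ...allocate and set the output tensor addresses the same way...

    cudaStream_t stream;
    cudaStreamCreate(&stream);
    context->enqueueV3(stream);  // asynchronous inference at the chosen shape
    cudaStreamSynchronize(stream);

    cudaStreamDestroy(stream);
    cudaFree(dInput);
    return 0;
}
```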
Bug Fixes
Full Changelog: v1.0...v2.0
TensorRT YOLO v1.0 - Release Notes
Breaking Changes
- Supports FLOAT32, FLOAT16 ONNX export, and TensorRT inference
- Supports YOLOv5, YOLOv8, PP-YOLOE, and PP-YOLOE+
- Integrates EfficientNMS TensorRT plugin for accelerated post-processing
- Utilizes CUDA kernel functions to accelerate preprocessing; see the kernel sketch after this list
- Supports Python inference
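The CUDA preprocessing item above covers the usual YOLO input steps (resize, scale to [0, 1], and interleaved HWC to planar CHW conversion) fused into a GPU kernel instead of running on the CPU. The sketch below is a simplified nearest-neighbour version; real pipelines typically also handle letterbox padding, bilinear sampling, and mean/std normalization, and the shapes here are illustrative.
```cpp
#include <cuda_runtime.h>
#include <cstdint>

// Minimal fused preprocessing kernel: nearest-neighbour resize + /255 scaling +
// interleaved HWC (BGR bytes) -> planar CHW (floats). Real pipelines usually add
// letterbox padding, bilinear sampling, and mean/std normalization on top.
__global__ void preprocessKernel(const uint8_t* __restrict__ src, int srcW, int srcH,
                                 float* __restrict__ dst, int dstW, int dstH) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= dstW || y >= dstH) return;

    // Nearest-neighbour source coordinate for this destination pixel.
    int sx = min(static_cast<int>(x * static_cast<float>(srcW) / dstW), srcW - 1);
    int sy = min(static_cast<int>(y * static_cast<float>(srcH) / dstH), srcH - 1);
    const uint8_t* p = src + (sy * srcW + sx) * 3;  // interleaved 3-channel pixel

    // Write each channel into its own plane, scaled to [0, 1] (assumes BGR input).
    int area = dstW * dstH;
    dst[0 * area + y * dstW + x] = p[2] / 255.f;  // R
    dst[1 * area + y * dstW + x] = p[1] / 255.f;  // G
    dst[2 * area + y * dstW + x] = p[0] / 255.f;  // B
}

// Host-side launch helper: one thread per output pixel.
void preprocess(const uint8_t* dSrc, int srcW, int srcH,
                float* dDst, int dstW, int dstH, cudaStream_t stream) {
    dim3 block(16, 16);
    dim3 grid((dstW + block.x - 1) / block.x, (dstH + block.y - 1) / block.y);
    preprocessKernel<<<grid, block, 0, stream>>>(dSrc, srcW, srcH, dDst, dstW, dstH);
}
```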
Bug Fixes
- Fix pycuda.driver.CompileError on Jetson (#1)
- Fix Engine Deserialization Failed using YOLOv8 Exported Engine (#2)
- Fix Precision Anomalies in YOLOv8 FP16 Engine (#3)
- Fix YOLOv8 EfficientNMS output shape abnormality (0e542ee); see the decoding sketch after this list
- Fix trtexec Conversion Failure for YOLOv5 and YOLOv8 ONNX Models on Linux (#4)
- Fix Inference Anomaly Caused by preprocess.cu on Linux (#5)
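Several of the fixes above hinge on the EfficientNMS plugin's fixed output layout: for each image it emits a detection count plus box/score/class arrays padded to max_det entries. The host-side decoding sketch below assumes that layout; the tensor names in the comment follow common YOLO export conventions and the corner (x1, y1, x2, y2) box coding is the plugin default, neither of which is guaranteed by this repository's exporter.
```cpp
#include <cuda_runtime.h>
#include <vector>

// EfficientNMS_TRT outputs, for batch size B and max_det M (names as commonly
// used by YOLO export scripts; assumptions, not taken from this repository):
//   num_dets    : int32 [B, 1]    valid detections per image
//   det_boxes   : float [B, M, 4] (x1, y1, x2, y2), padding past num_dets
//   det_scores  : float [B, M]
//   det_classes : int32 [B, M]
struct Detection { float x1, y1, x2, y2, score; int cls; };

// Decodes one image's slice of the device-side output buffers.
std::vector<Detection> decodeImage(const int* dNumDets, const float* dBoxes,
                                   const float* dScores, const int* dClasses,
                                   int maxDet, cudaStream_t stream) {
    int numDets = 0;
    std::vector<float> boxes(static_cast<size_t>(maxDet) * 4);
    std::vector<float> scores(maxDet);
    std::vector<int>   classes(maxDet);

    cudaMemcpyAsync(&numDets, dNumDets, sizeof(int), cudaMemcpyDeviceToHost, stream);
    cudaMemcpyAsync(boxes.data(), dBoxes, boxes.size() * sizeof(float), cudaMemcpyDeviceToHost, stream);
    cudaMemcpyAsync(scores.data(), dScores, scores.size() * sizeof(float), cudaMemcpyDeviceToHost, stream);
    cudaMemcpyAsync(classes.data(), dClasses, classes.size() * sizeof(int), cudaMemcpyDeviceToHost, stream);
    cudaStreamSynchronize(stream);

    // Only the first num_dets entries are meaningful; the rest is zero padding.
    std::vector<Detection> out;
    out.reserve(numDets);
    for (int i = 0; i < numDets; ++i) {
        out.push_back({boxes[i * 4 + 0], boxes[i * 4 + 1],
                       boxes[i * 4 + 2], boxes[i * 4 + 3],
                       scores[i], classes[i]});
    }
    return out;
}
```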
Full Changelog: https://github.com/laugh12321/TensorRT-YOLO/commits/v1.0