Releases: neo-ai/tvm
neo-ai TVM 1.13.0
neo-ai TVM 1.12.0
neo-ai TVM 1.10.1
TVM for DLR 1.10.0
This release can be used to compile models which are compatible with DLR release 1.10.0.
Notable changes
- Add support for the TensorFlow 2 (TF2) frontend parser
- Add support for Keras Dense layers with 3D inputs and for nested models (see the frontend import sketch after this list)
- Bug fixes for a TensorRT (TRT) memory leak, TRT performance, and weights being doubled after TRT compilation
- Bug fix for RelayVM failures on Windows
- Upgrade Treelite to version 1.2.0
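To illustrate the frontend items above, here is a minimal sketch that imports a small Keras model with a Dense layer applied to a 3D input and builds it for CPU. The model, the "data" input name, and all shapes are hypothetical examples, and the sketch assumes TensorFlow 2 plus a TVM build from this release line; real models need their own shapes and target.

```python
# Minimal sketch, assuming TensorFlow 2 and a TVM build from this release line.
# The model, the "data" input name, and all shapes are hypothetical examples.
import numpy as np
import tensorflow as tf
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Hypothetical Keras model exercising Dense on a 3D input (batch, seq, features).
keras_model = tf.keras.Sequential([
    tf.keras.Input(shape=(8, 16), name="data"),
    tf.keras.layers.Dense(32),
])

# The Keras frontend takes a dict mapping input names to full input shapes.
shape_dict = {"data": (1, 8, 16)}
mod, params = relay.frontend.from_keras(keras_model, shape_dict)

# Build for CPU and run once to sanity-check the import.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

rt = graph_executor.GraphModule(lib["default"](tvm.cpu()))
rt.set_input("data", np.random.rand(1, 8, 16).astype("float32"))
rt.run()
print(rt.get_output(0).shape)
```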
TVM for DLR 1.9.0
This release can be used to compile models which are compatible with DLR release 1.9.0.
Notable changes
- Bug fixes for the CUDA NMS implementation affecting MXNet models
- Fix RelayVM on 32-bit platforms (a Relay VM compile-and-run sketch follows this list)
- Add support for TensorFlow 2.x
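The RelayVM mentioned above is TVM's virtual-machine executor for models with dynamic shapes or control flow. Below is a minimal sketch of compiling and running a Relay module with it; the one-operator network is a hypothetical stand-in for a real model, and `relay.vm.compile` plus `tvm.runtime.vm.VirtualMachine` are the APIs assumed here.

```python
# Minimal sketch of the Relay VM flow; the toy network is a hypothetical
# stand-in for a real model with dynamic shapes or control flow.
import numpy as np
import tvm
from tvm import relay
from tvm.runtime.vm import VirtualMachine

# Hypothetical single-operator Relay module.
x = relay.var("x", shape=(1, 4), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.relu(x)))

# Compile with the Relay VM instead of the graph executor.
vm_exec = relay.vm.compile(mod, target="llvm")

vm = VirtualMachine(vm_exec, tvm.cpu())
out = vm.invoke("main", tvm.nd.array(np.random.rand(1, 4).astype("float32")))
print(out.numpy())
```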
TVM for DLR 1.8.0
This release can be used to compile models which are compatible with DLR release 1.8.0.
Notable changes
- Enable better performance for PyTorch and TensorFlow object detection models on GPU
TVM for DLR 1.7.0
This release can be used to compile models which are compatible with DLR release 1.7.0.
Notable changes
- Support loading of multiple DLR models
- Support PyTorch object detection models such as mask_rcnn_resnet on GPU (see the PyTorch import sketch after this list)
- Bug fix for PyTorch 1.5.1
- Add support for YOLOv5 in PyTorch
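For the PyTorch items above, the usual route into TVM is to trace the model with `torch.jit.trace` and pass the traced module to `relay.frontend.from_pytorch`. The sketch below uses a small torchvision classifier as a hypothetical stand-in, since tracing detection models such as Mask R-CNN or YOLOv5 needs extra care (dynamic shapes, the Relay VM); the input name and shapes are illustrative only.

```python
# Minimal sketch of importing a traced PyTorch model; resnet18 is a
# hypothetical stand-in for the detection models mentioned in the notes.
import torch
import torchvision
import tvm
from tvm import relay

model = torchvision.models.resnet18(pretrained=False).eval()
dummy = torch.randn(1, 3, 224, 224)
scripted = torch.jit.trace(model, dummy).eval()

# from_pytorch takes a list of (input_name, shape) pairs.
input_infos = [("input0", (1, 3, 224, 224))]
mod, params = relay.frontend.from_pytorch(scripted, input_infos)

# Build for CPU here; use target="cuda" for the GPU case in the notes.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)
```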
TVM for DLR 1.6.0
This release can be used to compile models which are compatible with DLR release 1.6.0.
Notable changes
- Bug fixes
- Support PyTorch object detection models such as mask_rcnn_resnet on CPU
- Support additional TensorFlow object detection models on GPU, such as ssd_mobilenet, mask_rcnn_resnet, and faster_rcnn_resnet (see the TensorFlow import sketch after this list)
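To illustrate the TensorFlow path above, the sketch below feeds a frozen GraphDef to `relay.frontend.from_tensorflow`. The one-op graph, its node names, and the shapes are hypothetical placeholders for a real detection graph; GPU builds would use `target="cuda"` instead of `"llvm"`.

```python
# Minimal sketch of the TensorFlow (GraphDef) frontend; the one-op graph and
# its node names are hypothetical placeholders for a real detection model.
import tensorflow as tf
import tvm
from tvm import relay

# Build a tiny frozen graph with the TF1-style API.
with tf.compat.v1.Graph().as_default() as graph:
    inp = tf.compat.v1.placeholder(tf.float32, shape=(1, 224, 224, 3), name="input")
    tf.nn.relu(inp, name="output")
    graph_def = graph.as_graph_def()

mod, params = relay.frontend.from_tensorflow(
    graph_def,
    layout="NCHW",                      # commonly preferred for GPU builds
    shape={"input": (1, 224, 224, 3)},
    outputs=["output"],
)

# Compile for CPU here; swap in target="cuda" for the GPU case in the notes.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)
```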
TVM for DLR 1.4.0
This release can be used to compile models which are compatible with DLR release 1.4.0.
Notable changes
- Bug fixes
- Support for PyTorch object detection models on CPU
- Support for TensorFlow object detection models on CPU
TVM for DLR 1.5.0
This release can be used to compile models which are compatible with DLR release 1.5.0.
Notable changes
- Support for TensorFlow SSD models on GPU targets by combining the CPU RelayVM (target "llvm") with TensorRT BYOC, as sketched below
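A rough sketch of that combination, assuming a TVM build with the TensorRT codegen and runtime enabled: partition the module with `partition_for_tensorrt`, then compile what remains with the Relay VM on the `"llvm"` target so the offloaded subgraphs run on the GPU through TensorRT at runtime. The tiny convolution network is a hypothetical stand-in for an SSD graph, and the sketch allows for older TVM versions that return an extra config object from the partition call.

```python
# Minimal sketch, assuming a TVM build with TensorRT codegen/runtime enabled.
# The small conv network is a hypothetical stand-in for a TensorFlow SSD model.
import numpy as np
import tvm
from tvm import relay
from tvm.relay.op.contrib.tensorrt import partition_for_tensorrt

x = relay.var("x", shape=(1, 3, 224, 224), dtype="float32")
w = relay.var("w", shape=(16, 3, 3, 3), dtype="float32")
net = relay.nn.relu(relay.nn.conv2d(x, w, padding=(1, 1)))
mod = tvm.IRModule.from_expr(relay.Function([x, w], net))
params = {"w": np.random.rand(16, 3, 3, 3).astype("float32")}

# Offload TensorRT-supported subgraphs; everything else stays in Relay.
result = partition_for_tensorrt(mod, params)
if isinstance(result, tuple):
    # Older TVM releases return (module, trt_config).
    mod, trt_config = result
    pass_config = {"relay.ext.tensorrt.options": trt_config}
else:
    mod, pass_config = result, {}

# Compile with the Relay VM on the CPU target; at runtime the partitioned
# subgraphs execute on the GPU through the TensorRT runtime.
with tvm.transform.PassContext(opt_level=3, config=pass_config):
    vm_exec = relay.vm.compile(mod, target="llvm", params=params)
```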