diff --git a/docs/build/eps.md b/docs/build/eps.md
index 47e69d22fdc0e..bce2d1defb248 100644
--- a/docs/build/eps.md
+++ b/docs/build/eps.md
@@ -144,6 +144,8 @@ See more information on the TensorRT Execution Provider [here](../execution-prov
 
 Dockerfile instructions are available [here](https://github.com/microsoft/onnxruntime/tree/main/dockerfiles#tensorrt)
 
+**Note**: Building with `--use_tensorrt_oss_parser` against TensorRT 8.X requires the additional flag `--cmake_extra_defines onnxruntime_USE_FULL_PROTOBUF=ON`.
+
 ---
 
 ## NVIDIA Jetson TX1/TX2/Nano/Xavier/Orin
diff --git a/docs/execution-providers/TensorRT-ExecutionProvider.md b/docs/execution-providers/TensorRT-ExecutionProvider.md
index ab03954d2a4b5..0c853884a4060 100644
--- a/docs/execution-providers/TensorRT-ExecutionProvider.md
+++ b/docs/execution-providers/TensorRT-ExecutionProvider.md
@@ -824,3 +824,6 @@ This example shows how to run the Faster R-CNN model on TensorRT execution provi
 ```
 
 Please see [this Notebook](https://github.com/microsoft/onnxruntime/blob/main/docs/python/notebooks/onnx-inference-byoc-gpu-cpu-aks.ipynb) for an example of running a model on GPU using ONNX Runtime through Azure Machine Learning Services.
+
+## Known Issues
+- The TensorRT 8.6 built-in parser and the TensorRT OSS parser behave differently: the built-in parser cannot recognize some custom plugin ops that the OSS parser can. See [EfficientNMS_TRT missing attribute class_agnostic w/ TensorRT 8.6](https://github.com/microsoft/onnxruntime/issues/16121).
\ No newline at end of file