It's an inference engine. Given an ONNX model and some inputs, it executes the graph and returns output tensors. See https://onnxruntime.ai/docs/ for more information.
Is ONNX Runtime an interpreter or a compiler?