External code during device inference #273
Hi @PVSemk, yes, this is now becoming possible using our Gen2 Pipeline Builder (#136). We have an initial example implementation of such logic between networks here: In that example, the processing/logic between each neural network is done on the host, so I think the same approach should be usable for your purposes. We could help you implement it live via our Slack if you are interested (here). We could also probably take a stab at implementing the pipeline on the device if you are comfortable sharing the neural models. Thoughts? Thanks,
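For reference, a minimal sketch of what that host-in-the-middle flow could look like with the Gen2 Python API. This is an illustration under assumptions, not a tested implementation: the two blob names, the second half's input layer name, and the placeholder host function are all hypothetical, and exact method names may differ between DepthAI Gen2 releases.

```python
# Minimal Gen2 sketch (assumed blob names, assumed "input" layer name;
# API details may vary between DepthAI Gen2 releases).
import numpy as np
import depthai as dai

pipeline = dai.Pipeline()

# First half of the split network, fed from the host and read back to the host.
nn1 = pipeline.createNeuralNetwork()
nn1.setBlobPath("depth_part1.blob")
xin1 = pipeline.createXLinkIn()
xin1.setStreamName("nn1_in")
xin1.out.link(nn1.input)
xout1 = pipeline.createXLinkOut()
xout1.setStreamName("nn1_out")
nn1.out.link(xout1.input)

# Second half of the split network, fed with the host-processed tensor.
nn2 = pipeline.createNeuralNetwork()
nn2.setBlobPath("depth_part2.blob")
xin2 = pipeline.createXLinkIn()
xin2.setStreamName("nn2_in")
xin2.out.link(nn2.input)
xout2 = pipeline.createXLinkOut()
xout2.setStreamName("nn2_out")
nn2.out.link(xout2.input)

def custom_op_on_host(tensor):
    # Placeholder: the unsupported operation (e.g. grid_sample) would run here
    # on the host, in Python or via bindings to external C++ code.
    return tensor

with dai.Device(pipeline) as device:
    q_in1 = device.getInputQueue("nn1_in")
    q_out1 = device.getOutputQueue("nn1_out")
    q_in2 = device.getInputQueue("nn2_in")
    q_out2 = device.getOutputQueue("nn2_out")

    # ... build an input (dai.NNData / dai.ImgFrame) for the first half
    #     and send it with q_in1.send(...) ...

    # Pull the intermediate tensor from the first half.
    intermediate = np.array(q_out1.get().getFirstLayerFp16(), dtype=np.float32)

    # Run the unsupported op on the host, then feed the second half.
    processed = custom_op_on_host(intermediate)
    nn_data = dai.NNData()
    nn_data.setLayer("input", processed.flatten().tolist())
    q_in2.send(nn_data)

    result = q_out2.get()  # final output of the second half
```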
And another option is to implement that custom logic in OpenCL as its own 'network' that could run as a node in the pipeline builder. In that case, no operations would be done on the host.
Let me know if you have any additional questions @PVSemk, otherwise I'm thinking I'll close this issue soon. Thoughts? Thanks again!
Hello, I'm interested in running a depth-estimation network, written in PyTorch, on your device.
We've run into an issue with an operation (grid_sample) that is not supported by ONNX export or the OpenVINO Toolkit. Our current plan is to split the model into two parts (before and after the operation) and run external C++ code implementing the operation to process the intermediate tensors (a host-side sketch of such an operation is included after this message). Is it possible to run such a pipeline on your device?
Thanks in advance,
Pavel Semkin
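To make the "external code" step concrete, here is a minimal host-side sketch of bilinear grid_sample in plain NumPy, assuming align_corners=True and zeros padding and the same tensor layout as torch.nn.functional.grid_sample. This is an editorial illustration only; the external C++ code referred to above would compute the same thing.

```python
# Minimal bilinear grid_sample sketch (assumptions: align_corners=True, zeros
# padding). feat is (N, C, H, W); grid is (N, H_out, W_out, 2) with x, y in [-1, 1].
import numpy as np

def grid_sample_np(feat, grid):
    n, c, h, w = feat.shape
    x = (grid[..., 0] + 1) * 0.5 * (w - 1)   # map x from [-1, 1] to [0, W-1]
    y = (grid[..., 1] + 1) * 0.5 * (h - 1)   # map y from [-1, 1] to [0, H-1]

    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = x0 + 1, y0 + 1

    def gather(xi, yi):
        # Zero padding: samples outside the feature map contribute nothing.
        valid = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)
        xi_c, yi_c = np.clip(xi, 0, w - 1), np.clip(yi, 0, h - 1)
        batch = np.arange(n)[:, None, None]
        vals = feat[batch, :, yi_c, xi_c]     # -> (N, H_out, W_out, C)
        return vals * valid[..., None]

    # Standard bilinear interpolation weights for the four neighbours.
    wa = ((x1 - x) * (y1 - y))[..., None]
    wb = ((x1 - x) * (y - y0))[..., None]
    wc = ((x - x0) * (y1 - y))[..., None]
    wd = ((x - x0) * (y - y0))[..., None]

    out = wa * gather(x0, y0) + wb * gather(x0, y1) + \
          wc * gather(x1, y0) + wd * gather(x1, y1)
    return out.transpose(0, 3, 1, 2)          # back to (N, C, H_out, W_out)
```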