Accessing Resize op (ResizeNearestNeighbor) in QNN #22549
Comments
We do map the ONNX Resize op to ResizeNearestNeighbor for some cases: see onnxruntime/onnxruntime/core/providers/qnn/builder/opbuilder/resize_op_builder.cc, lines 276 to 281 at commit fc2be09.
Could you try the latest code with the latest QNN 2.27?
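To check whether the Resize node is actually being assigned to the QNN EP on your device, one option is to enable verbose session logging, which prints node-to-EP placement. A minimal sketch, assuming a Python session; the model path and backend library name are placeholders and may differ on your setup:

```python
import onnxruntime as ort

# Verbose logging (severity 0) prints which execution provider each node is
# assigned to, so you can see whether Resize stays on QNNExecutionProvider
# or falls back to CPU.
sess_options = ort.SessionOptions()
sess_options.log_severity_level = 0

session = ort.InferenceSession(
    "model.onnx",  # placeholder model path
    sess_options=sess_options,
    providers=[
        ("QNNExecutionProvider", {"backend_path": "libQnnHtp.so"}),  # use the backend .so for your target
        "CPUExecutionProvider",
    ],
)
print(session.get_providers())
```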
Thanks for your reply @HectorSVC....
I note that the mapping is guarded by a condition in that code. Should I try to remove that guard, or is there some way to get the mapping to work for the GPU backend? Thank you.
I will also give the latest code with QNN 2.27 a try, but it looks like we need to get the mapping to work either way, correct?
@HectorSVC When I download QNN from https://www.qualcomm.com/developer/software/neural-processing-sdk-for-ai I get:
We have a QNN download link on our webpage: https://onnxruntime.ai/docs/execution-providers/QNN-ExecutionProvider.html We don't support the GPU backend for now since we haven't seen a strong desire for that. Please let us know your concerns and usage scenarios.
When I click that link I see "no releases available" and "no documents found" (see screen capture below). Are there special permissions I would need to be able to see QNN 2.27?
It's surprising there's not a strong desire for GPU support. We are very interested in this, as we have customers who like the price point of running custom-trained computer vision/deep learning networks on Qualcomm hardware instead of the competition. These customers have hundreds or thousands of sites with only 1-3 cameras each, so larger edge devices from the competition are overkill and overpriced. However, our customers would like to run the same models we currently offer on other edge devices. This is why we are interested in using the GPU if possible, ideally without modifying the models (as we'd have to for the DSP/HTP).
If I remove
Thank you. (Screenshot showing no access to QNN 2.27 from the shared link:)
Describe the issue
Currently the Resize operation is not supported by QNN on GPU, but ResizeNearestNeighbor and ResizeBilinear are supported (see attached image and https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/SupportedOps.html). Is there a way for an ONNX model to leverage those supported ops, such as ResizeNearestNeighbor? So far as I can tell, there is no ResizeNearestNeighbor ONNX operator, according to the list here: https://onnx.ai/onnx/operators/

I'm trying to run a YOLO model on a QCS6490 GPU, but it contains a couple of Resize nodes which are not supported. I would like to replace those Resize nodes with ResizeNearestNeighbor if that would allow us to run.
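As a quick way to see which Resize nodes the model contains and which interpolation mode they use (presumably nearest-mode Resize is what would map to ResizeNearestNeighbor), something like the following sketch can list them; it uses the onnx Python package and a placeholder model path:

```python
import onnx

model = onnx.load("yolo_model.onnx")  # placeholder path

for node in model.graph.node:
    if node.op_type == "Resize":
        # The "mode" attribute ("nearest", "linear", "cubic") determines which
        # QNN resize op the node could map to; "nearest" is the ONNX default
        # when the attribute is absent.
        mode = next(
            (attr.s.decode("utf-8") for attr in node.attribute if attr.name == "mode"),
            "nearest",
        )
        print(node.name, "->", mode)
```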
I'm running with onnxruntime-rel-1.18.2 and QAIRT / QNN SDK v2.22.6.240515.zip, which is a working combination apart from the lack of Resize node support.

Thank you.
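For completeness, a minimal sketch of how a session targeting the QNN GPU backend could be created: backend_path is the QNN EP provider option documented on the page linked above, while the libQnnGpu.so library name and the model path are assumptions for illustration:

```python
import onnxruntime as ort

# backend_path selects the QNN backend library; libQnnGpu.so is assumed to be
# the GPU backend shipped with the QAIRT/QNN SDK and must be findable at runtime.
session = ort.InferenceSession(
    "yolo_model.onnx",  # placeholder path
    providers=[
        ("QNNExecutionProvider", {"backend_path": "libQnnGpu.so"}),
        "CPUExecutionProvider",  # fallback for nodes QNN GPU does not support (e.g. Resize)
    ],
)
```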
To reproduce
Urgency
No response
Platform
Linux
OS Version
Linux qcs6490-odk 5.4.233-perf #1 SMP PREEMPT Wed Apr 3 03:19:05 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
ONNX Runtime Installation
Built from Source
ONNX Runtime Version or Commit ID
rel-1.18.2 commit 9691af1
ONNX Runtime API
Python
Architecture
ARM64
Execution Provider
SNPE
Execution Provider Library Version
QAIRT QNN 2.22.6.240515