Improved ONNX support with dynamic shapes #117
Example usage in Python (download the model first):

```shell
wget https://huggingface.co/onnx-community/metric3d-vit-small/resolve/main/onnx/model.onnx
```

```python
import onnxruntime as ort
import requests
import numpy as np
from PIL import Image

# Load session
ort_session = ort.InferenceSession("./model.onnx", providers=["CPUExecutionProvider"])

# Load image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Predict depth (avoid shadowing the built-in `input`)
pixel_values = np.array(image).transpose(2, 0, 1)   # HWC -> CHW
pixel_values = np.expand_dims(pixel_values, 0)      # Add batch dim
onnxruntime_input = {"pixel_values": pixel_values.astype(np.float32)}
pred_depth, pred_normal, normal_confidence = ort_session.run(None, onnxruntime_input)

# Normalize the depth map to [0, 255] and save as an 8-bit image
min_val = pred_depth.min()
max_val = pred_depth.max()
normalized = 255 * ((pred_depth - min_val) / (max_val - min_val))
Image.fromarray(normalized[0].astype(np.uint8)).save("depth.png")
```
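As noted later in the thread, the predicted depth map may come back at a different resolution than the input image. A minimal sketch of resizing it back with PIL, using synthetic arrays in place of the model output (all shapes here are assumed for illustration):

```python
import numpy as np
from PIL import Image

# Synthetic stand-ins (shapes assumed): a 640x480 input image and a
# lower-resolution (1, 308, 420) depth map as the model might return.
image = Image.new("RGB", (640, 480))
pred_depth = np.random.rand(1, 308, 420).astype(np.float32)

# Normalize to [0, 255] as in the snippet above, then resize to the input size.
min_val, max_val = pred_depth.min(), pred_depth.max()
normalized = 255 * (pred_depth - min_val) / (max_val - min_val)
depth_img = Image.fromarray(normalized[0].astype(np.uint8))
depth_img = depth_img.resize(image.size, Image.BILINEAR)  # PIL sizes are (W, H)
print(depth_img.size)
```

Note that a bilinear resize of the depth values is only a visualization convenience; for metric use you may prefer nearest-neighbor or resizing before inference.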
Hi @xenova, thanks for your support. Do you mind joining this project and adding your work to our README?
@YvanYin you're welcome! :) Do you mean submitting a PR? If so, then sure!
@xenova I invited you.
@xenova Great work! Using the large model it works great!
Hi! Great work on both models and export. Smaller ones work great.
Same problem here. Can anyone share a correct inference script for the ONNX giant2 model? Really appreciate it.
Hello, I'm using the code that you provided (here) with my own image. I see that the output size is not the same as the input size. Is this OK?
Thank you in advance!
Hi there! 👋 Following the conversation in #103, I wanted to export the models so they (1) support dynamic shapes and (2) return the normal information, mainly to run the models with Transformers.js. I got them working, and I've uploaded them to the Hugging Face Hub (you can find the .onnx weights, both fp32 and fp16, in the `onnx` subfolder). Feel free to use them yourself or add the links to the README for increased visibility! 🤗 PS: I'd also recommend uploading your original PyTorch checkpoints to separate repos (instead of a single repo). Let me know if I can help with any of this!
Regarding the export, there were a few things to consider, mainly fixing the modelling code to avoid Python type casts (ensuring the dynamic shapes work during tracing). I also made a few modifications to support CPU exports. Here's my conversion code:
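To illustrate the type-cast pitfall mentioned above (a hypothetical pattern, not the actual Metric3D code): calling `int()` on a traced tensor shape bakes the value into the exported graph as a constant, while passing tensor-derived shapes through directly lets the tracer keep the dimensions symbolic.

```python
import torch
import torch.nn.functional as F

def upsample_static(x, ref):
    # int(...) converts the traced shape to a Python constant, so the exported
    # ONNX graph is pinned to ref's height/width at export time.
    h, w = int(ref.shape[-2]), int(ref.shape[-1])
    return F.interpolate(x, size=(h, w), mode="bilinear", align_corners=False)

def upsample_dynamic(x, ref):
    # Passing the shape through without a Python cast lets the tracer record
    # shape ops, so the dimensions can stay dynamic in the exported graph.
    return F.interpolate(x, size=ref.shape[-2:], mode="bilinear", align_corners=False)

x = torch.zeros(1, 3, 8, 8)
ref = torch.zeros(1, 3, 16, 16)
out = upsample_dynamic(x, ref)
print(tuple(out.shape))
```

Both functions behave identically in eager mode; the difference only shows up in the traced/exported graph.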