add yolov10 detection node #8753
Comments
@storrrrrrrrm
If the inference code can be reused from existing Autoware code, it shouldn't matter. https://github.com/THU-MIG/yolov10/blob/main/docs/en/integrations/tensorrt.md We will only use the model exported to TensorRT format. But if we are going to use the TensorRT inference code from their repository, we cannot put it under Autoware.Universe. In that case, we can create a sub-repo under autowarefoundation and add it to
Don't worry, I will only refer to the code logic; I will implement the new node based on the existing Autoware code.
@storrrrrrrrm (cc: @mitsudome-r @xmfcx)
Morning, kminoda-san @kminoda. Thanks for your reminder. I checked the original YOLOv10 repo and found an open discussion about the license of the model (discussion link). It looks like the author has not responded yet. Since the tensorrt-yolov10 node code is completely refactored from TIER IV's tensorrt_yolox, I believe it should follow TIER IV's license (Apache 2.0), as Shin-san suggested (discussion link is here). Looking forward to your further comments. Have a nice day! Xingang
@liuXinGangChina Thank you for the response.
Yes, I understand this, but my point is that there might be a legal risk if we include the trained model from the YOLOv10 repository (which would also be under the AGPL license) in autoware.universe. Do you have any plans to resolve this issue? Some solutions I can come up with are:
@mitsudome-r @xmfcx May I hear your thoughts?
Thanks for your suggestion. We would prefer solution 2 (keep the repo separate from Autoware). Let's wait for @mitsudome-r and @xmfcx's decision. Have a nice day! Lucas
As mentioned in this thread and the PR thread, the inference code is licensed under Apache 2.0, and the PR includes only this code. The model and training code are licensed under AGPL-3.0. We can proceed with merging the PR since it only includes the inference code, which aligns with our project's licensing requirements. To ensure clarity and compliance, we should update the README to specify:
We should also briefly explain the AGPL-3.0 license, particularly its requirement to share the "training source code" when the model is deployed over a network. This will help users understand their responsibilities when using the repository. I don't think we need to take urgent action on preparing a new repository for the time being. The model is not even uploaded to our servers yet to be distributed. @kminoda @liuXinGangChina do you still have any questions or concerns?
Thanks for your comment, Mr. Fatih @xmfcx. Totally agree with you; what you suggest is what we were trying to do from the beginning. Have a nice day, kminoda-san and Mr. Fatih. Xingang
Checklist
Description
Add a node that uses the latest YOLOv10 model for object detection.
Purpose
YOLOv10 is faster and achieves higher AP on COCO than previous YOLO-series models. It may help improve detection performance.
Possible approaches
https://github.com/THU-MIG/yolov10 — referring to the Python code in the repository above, we can implement a node that uses TensorRT for inference.
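As a rough sketch of the node's postprocessing step: YOLOv10 is NMS-free, so the exported end-to-end engine already emits final detections, and postprocessing reduces to a confidence threshold. The assumed output layout below, `(num_dets, 6)` with `(x1, y1, x2, y2, score, class_id)`, follows the THU-MIG export docs but is an assumption, not something confirmed in this thread; the actual node would be C++ based on TIER IV's tensorrt_yolox.

```python
import numpy as np

def postprocess(raw: np.ndarray, score_thresh: float = 0.3):
    """Filter YOLOv10 end-to-end detections by confidence.

    `raw` is assumed to be (num_dets, 6): x1, y1, x2, y2, score, class_id.
    No NMS is needed because the YOLOv10 head is NMS-free.
    """
    keep = raw[:, 4] >= score_thresh          # boolean mask over detections
    dets = raw[keep]
    boxes = dets[:, :4]                        # (N, 4) corner-format boxes
    scores = dets[:, 4]
    class_ids = dets[:, 5].astype(np.int64)
    return boxes, scores, class_ids

# Toy example: one confident detection, one filtered out by the threshold.
raw = np.array([
    [10.0, 10.0, 50.0, 50.0, 0.9, 0.0],
    [20.0, 20.0, 40.0, 40.0, 0.1, 2.0],
])
boxes, scores, class_ids = postprocess(raw, score_thresh=0.3)
```

In the actual node this filtering would run on the engine's output buffer after `enqueueV2`/`enqueueV3`, mirroring how tensorrt_yolox handles its decoded detections.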
Definition of done