
How to use Nuclio once I have deployed as plugin in CVAT? #2259

Closed
davodogster opened this issue Oct 5, 2020 · 13 comments · Fixed by #3124
Labels: question (Further information is requested)

Comments

davodogster commented Oct 5, 2020

Hi, I have deployed CVAT with nuclio as a plugin. Now how do I deploy dextr, f-brs, and my own models like maskrcnn and yolov5? I don't think any documentation exists for this yet?

[screenshot]

I try to run nuctl commands in the nuclio container, but I get an error:
[screenshot]

Also, this documentation already seems outdated? https://github.com/openvinotoolkit/cvat/blob/develop/cvat/apps/documentation/installation.md#semi-automatic-and-automatic-annotation

Regards, Sam

azhavoro (Contributor) commented Oct 6, 2020

Hi Sam,

I think this is the up-to-date documentation, except that you also need an extra compose configuration to start the nuclio server: https://github.com/openvinotoolkit/cvat/blob/develop/components/serverless/README.md (this will be fixed soon).
You cannot execute the nuctl command inside the nuclio container, because the image doesn't contain that executable. You need to download the nuctl binary for your OS (it looks like you use Windows) from https://github.com/nuclio/nuclio/releases, rename it to nuctl, and place it in any directory that is on your system PATH or in your current working directory. The nuctl command should then be available on your host system.
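
For reference, on Linux the steps look roughly like this (a sketch only: the release number, the function path, and the shared-volume path are placeholders and may differ between CVAT versions, so check the installation guide above for the exact command):

    # Install nuctl (replace <version> with the latest release for your OS)
    wget https://github.com/nuclio/nuclio/releases/download/<version>/nuctl-<version>-linux-amd64
    chmod +x nuctl-<version>-linux-amd64
    sudo mv nuctl-<version>-linux-amd64 /usr/local/bin/nuctl
    nuctl version

    # From the cvat repository root, deploy one of the bundled serverless functions
    # to the local platform (the paths below are illustrative)
    nuctl create project cvat
    nuctl deploy --project-name cvat \
        --path serverless/openvino/dextr/nuclio \
        --volume "$(pwd)/serverless/common:/opt/nuclio/common" \
        --platform local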

Does it answer your question?

davodogster (Author) commented Oct 7, 2020

Hi @azhavoro (Andrey). Sorry, I assumed that running nuclio as a plugin removed the need to execute nuctl commands locally. Last week, after some trouble (there is little information online about installing and running nuctl on Windows), I managed to get nuctl running on Windows. It had to be renamed to nuctl.exe to run, but it seems useless because the commands fail. Here in CMD:

[screenshot]

Windows recognises nuctl

[screenshot]

And it fails in Git Bash:

[screenshot]

However, neither works. Please help.
Regards, Sam

P.S. When I bash into the nuclio container, I don't have permission to explore any of the directories.

Proof that all containers are running (different ports for CVAT vs. nuclio):
[screenshot]

Loc-Vo commented Oct 7, 2020

From what I know, it is a compatibility issue between Nuclio and WSL on Windows 10.
I got the same error as yours (tried with different nuclio versions) on a Windows box, but a Linux box works fine!

Please see the thread below for more information:
#2127 (comment)

Ironman1508 commented

Hi @davodogster, can you please share the process for uploading our own models, like YOLOv4 or Faster R-CNN, trained on our own dataset?

jackneil commented Oct 7, 2020

What I ended up doing to get the serverless nuclio project running was to create the project from the nuclio dashboard (localhost:8070) and then modify the function.yaml I imported there, adding some wget commands to pull python3 (then chmod +x it) and model_loader.py and place them into the /opt/nuclio/common/ folder (I posted these files at jackmd.com/nuclio/). Then I combined the main.py and model_handler.py scripts, pasted them into the Code box in the nuclio dashboard, and deployed. Basically, I added the following to my function.yaml:

directives:
  preCopy:
    - kind: USER
      value: root
    - kind: WORKDIR
      value: /opt/nuclio
    - kind: RUN
      value: ln -s /usr/bin/pip3 /usr/bin/pip
    - kind: RUN
      value: apt-get update && apt-get install -y wget
    - kind: RUN
      value: mkdir /opt/nuclio/common
    - kind: RUN
      value: wget http://jackmd.com/nuclio/python3 -O /opt/nuclio/common/python3 && chmod +x /opt/nuclio/common/python3
    - kind: RUN
      value: wget http://jackmd.com/nuclio/model_loader.py.txt -O /opt/nuclio/common/model_loader.py
    - kind: RUN
      value: wget http://jackmd.com/nuclio/function.yaml -O /opt/nuclio/function.yaml
    - kind: RUN
      value: /opt/intel/openvino/deployment_tools/open_model_zoo/tools/downloader/downloader.py --name yolo-v3-tf -o /opt/nuclio/open_model_zoo
    - kind: RUN
      value: /opt/intel/openvino/deployment_tools/open_model_zoo/tools/downloader/converter.py --name yolo-v3-tf --precisions FP32 -d /opt/nuclio/open_model_zoo -o /opt/nuclio/open_model_zoo

Clearly you'll need to tweak this, but it works to do it this way. One other helpful tidbit: to mount a Windows volume, the only way I could get it to work was by placing the following in my docker-compose.override.yaml:

version: "2.3"
services:
cvat:
environment:
CVAT_SHARE_URL: "Mounted from /mnt/e host directory"
volumes:
- type: bind
source: "e:\VoTT"
target: /home/django/share`

Make sure to change the 'source' to your local dir
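
For what it's worth, once the override file sits next to docker-compose.yml, bringing CVAT up together with the nuclio components looks roughly like this (a sketch: the serverless compose file path is my assumption based on the README linked earlier in the thread, and because files are listed explicitly with -f, docker-compose does not auto-merge the override file, so it has to be listed too):

    # Build and start CVAT plus the nuclio dashboard, including the override file
    docker-compose \
        -f docker-compose.yml \
        -f components/serverless/docker-compose.serverless.yml \
        -f docker-compose.override.yaml \
        up -d --build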
Hope this helps
-jack

davodogster (Author) commented Oct 7, 2020

Thanks so much, @jackneil! I'll have to give that a try (it's a shame that it's a little tricky like that on Windows, @nmanovic). Did you use it for a model or for one of CVAT's auto-segmentation tools like dextr?

@Ironman1508 I think more documentation from CVAT on how to do it will come. In the older version of CVAT I converted Mask R-CNN and Faster R-CNN to OpenVINO and then just uploaded them to CVAT, but that's not possible in the new version. You have to install nuclio and deploy the models as serverless functions (I'm still learning about it and have yet to do it successfully). CVAT has some documentation on it. Cheers

jackneil commented Oct 9, 2020

It's set up to be able to run our custom model, but I'm not sure how to convert our YOLOv4 model into the required OpenVINO files. I actually just put whatever I want in the container onto a web server (a localhost server would work) and use wget commands to pull it in during the function build. I had to do that since I couldn't get nuctl to work on Windows.
@davodogster Any advice on how to convert YOLOv4 to the required OpenVINO/TF files? I dug through their GitHub but found nothing for v4. Specifically, what I have are:

  • mylabels.names
  • myweights.weights
  • myyolov4config.cfg

And apparently what I need are:

  • 3 files in the FP32 dir (a yolov4.bin, yolov4.mapping, and yolov4.xml)
  • yolov4.json
  • yolov4.pb

I think I have found out how to convert the weights to the .pb format, but I'm at a loss for the rest.

veer5551 commented Oct 9, 2020

Hey @jackneil,

Thanks for sharing the scripts and procedure for deploying via Nuclio Dashboard!

FYI,
I have converted the yolov4 to Openvino format using this repo: https://github.com/TNTWEN/OpenVINO-YOLOV4
It goes YOLOv4 (.weights and .json) --> TensorFlow (frozen .pb) --> OpenVINO (.bin and .xml + .mapping).
Also, the converted model runs with the object_detection_async_yolov3.py demo script provided in the demo folder.
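
For anyone following along, the conversion with that repo looks roughly like the sketch below (script names, flags, and output filenames are my assumptions from reading the repo's README and the OpenVINO docs at the time, so double-check against the repo before running):

    # 1. Darknet weights -> frozen TensorFlow graph (script from TNTWEN/OpenVINO-YOLOV4)
    python convert_weights_pb.py \
        --class_names mylabels.names \
        --weights_file myweights.weights \
        --data_format NHWC

    # 2. Frozen .pb -> OpenVINO IR (.xml/.bin/.mapping) via the Model Optimizer
    python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py \
        --input_model frozen_darknet_yolov4_model.pb \
        --transformations_config yolo_v4.json \
        --batch 1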

I was wondering whether you had used a custom model, because I saw in the function.yaml that the label specs are updated, but the model used is yolo-v3-tf?

Can you shed some more light on this?

Hope this helps!
Thanks a lot!

jackneil commented Oct 9, 2020

That should be helpful. I think I've looked at that project before but didn't see that last step taking it to .bin, .xml, and .mapping.

Currently I've scripted the update of custom labels in the CVAT yaml, but it is still actually pulling the weights and models from the OMZ, as you can see at the bottom of the yaml where it calls the downloader.py scripts. Once I have the right files to replace those, I'll have it wget our files directly into the /opt/nuclio folder and update the .py nuclio handler files (I imagine) to point to the right place.

davodogster (Author) commented Oct 15, 2020

@jackneil this might be a solution: nuclio/nuclio#1821. Also, sorry, I've never converted YOLO models to OpenVINO. I thought nuclio/serverless removed the need for OpenVINO conversion?

Loc-Vo commented Oct 15, 2020

It would be great if any of you could note down the steps to deploy a custom model, since I really need to add more classes to the existing yolov3 but don't know how to do it...

Loc-Vo commented Oct 19, 2020

(Quoting @jackneil's Oct 9 comment above about scripting the label updates while still pulling the weights and models from the OMZ.)

Hi @jackneil, were you able to deploy the custom model your way?
I tried to convert my custom model from YOLOv3 to IR format and manually put the files into the FP32 folder of the yolov3 container, then restarted nuclio and the yolo server itself.
However, when picking the new custom class from the detector in CVAT, no bounding boxes show up.

P.S.: my custom model is based on darknet and was tested with the darknet test command, which works as expected.

Please let me know if I missed anything.

@azhavoro azhavoro linked a pull request Nov 9, 2020 that will close this issue
@azhavoro azhavoro added the question Further information is requested label Nov 9, 2020
bsekachev (Member)

Serverless tutorial is in progress: #3124
