Automatic Image Annotation: Steps for custom model deployment #3457
@vgupta13, could you please read https://openvinotoolkit.github.io/cvat/docs/manual/advanced/serverless-tutorial/ and clarify what is not clear in the tutorial? We will improve it.
@nmanovic for example: what changes are required in the function.yaml file when the user wants to symbolically link the local directory path of the custom model instead of downloading it from the internet? Here is the postCopy section from function.yaml that I have configured:
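(The configured snippet itself is not reproduced above. For illustration only, a hypothetical sketch of what such a postCopy section in a CVAT Nuclio function.yaml could look like; the image name, base image, and paths below are assumptions. Note that build directives execute as Dockerfile instructions inside the image being built, so a symlink that points at a host directory will not resolve at runtime, which matches the diagnosis later in this thread.)

```yaml
# Hypothetical sketch of spec.build.directives in a CVAT Nuclio function.yaml.
# Directives run while the function image is built, so any path referenced here
# must exist inside the image, not on the host machine.
spec:
  build:
    image: cvat/custom.faster_rcnn_inception_v2_coco   # assumed image name
    baseImage: tensorflow/tensorflow:1.15.5             # assumed base image
    directives:
      postCopy:
        - kind: WORKDIR
          value: /opt/nuclio
        - kind: RUN
          # Problematic pattern: /home/user/models is a host path and is not
          # visible inside the container at build or run time.
          value: ln -s /home/user/models/faster_rcnn_inception_v2_coco_2018_01_28 faster_rcnn
```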
The deployment was successful, but the container is attempting to restart repeatedly. On further investigation at the docker level, the log information (see below) indicates that the symbolic link might not be working.

```
{"datetime": "2021-07-29 17:31:35,325", "level": "error",
 "message": "Caught unhandled exception while initializing",
 "with": {"err": "/opt/nuclio/faster_rcnn/frozen_inference_graph.pb; No such file or directory",
          "worker_id": "0"}}

Traceback (most recent call last):
  File "/opt/nuclio/_nuclio_wrapper.py", line 350, in run_wrapper
    args.trigger_name)
  File "/opt/nuclio/_nuclio_wrapper.py", line 80, in __init__
    getattr(entrypoint_module, 'init_context')(self._context)
  File "/opt/nuclio/main.py", line 12, in init_context
    model_handler = ModelLoader(model_path)
  File "/opt/nuclio/model_loader.py", line 15, in __init__
    serialized_graph = fid.read()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/lib/io/file_io.py", line 122, in read
    self._preread_check()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/lib/io/file_io.py", line 84, in _preread_check
    compat.as_bytes(self.__name), 1024 * 512)
tensorflow.python.framework.errors_impl.NotFoundError: /opt/nuclio/faster_rcnn/frozen_inference_graph.pb; No such file or directory
```

Could you please help me configure the function.yaml file correctly in this case?
@nmanovic here is the deployment log: Deploying custommodel/faster_rcnn_inception_v2_coco function...
@nmanovic I learned about the problem with establishing a symbolic link to local storage inside the docker config. My workaround is to put the model file in cloud storage (e.g., Google Drive), get a shareable link, add directives that use wget inside the docker config to download and unpack it, and then create the symbolic link. However, now I am getting another issue related to function invocation:

```
Error: Inference status for the task 1 is failed.
requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http://nuclio:8070/api/function_invocations
```

I see another thread on this topic, but so far no success. Let me know if you have any leads.

PS: As you might be aware, TensorFlow deprecated tf.Session in tf2.x, so it is no longer trivial to convert a tf2.x model into a tf1.x frozen graph. Could you please consider extending support to tf2.x?
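(For illustration, the download-based workaround described above could look roughly like the following in the build directives; the URL, archive name, and directory names are hypothetical, not the actual values used in this thread.)

```yaml
# Hypothetical sketch of the download-based workaround in spec.build.directives.
# The model archive is fetched from a shareable cloud-storage URL at image-build time,
# unpacked, and symlinked to the path that main.py expects.
spec:
  build:
    directives:
      postCopy:
        - kind: WORKDIR
          value: /opt/nuclio
        - kind: RUN
          value: wget -O model.tar.gz "https://example.com/shareable-link/model.tar.gz"
        - kind: RUN
          value: tar -xzf model.tar.gz && rm model.tar.gz
        - kind: RUN
          value: ln -s faster_rcnn_inception_v2_coco_2018_01_28 faster_rcnn
```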
@vgupta13, why do you think that only a tf1 frozen graph can be used? Basically, you can use even your own DL framework inside serverless functions; we don't have any limitations. If you write the instructions for your serverless function correctly, it should work. See the troubleshooting section inside the serverless tutorial; it will probably answer some of your questions. The debugging section can help as well.
@vgupta13 what are the contents of /opt/nuclio/faster_rcnn? The error is saying that it can't find /opt/nuclio/faster_rcnn/frozen_inference_graph.pb.
@JoshChristie as I already mentioned, it was due to the symbolic link directives inside the function.yml file; I have resolved this issue. @nmanovic: the HTTPError: 500 pertaining to the function invocation has been resolved after providing a port number explicitly inside the function.yml. It would be really nice if you could add instructions for defining a port inside function.yml to the serverless tutorial.
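(For readers hitting the same 500 error, a sketch of an HTTP trigger with an explicit port in function.yml might look like the following; the trigger name, port number, and attribute values are assumptions and are not taken from this thread.)

```yaml
# Hypothetical sketch: declaring the HTTP trigger with an explicit port.
spec:
  triggers:
    myHttpTrigger:
      kind: "http"
      maxWorkers: 2
      workerAvailabilityTimeoutMilliseconds: 10000
      attributes:
        port: 33000                   # assumed free host port for the function container
        maxRequestBodySize: 33554432  # 32 MB, large enough for base64-encoded images
```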
I want to deploy a skeleton-type model as a serverless function, but the label has an svg template, and I define the label in the function.yaml file. In annotations.spec, how can I check which keys and values the label dictionary should have (e.g., "id": 1, "name": "person", ...)?
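(For reference, a sketch of how labels are commonly declared in a CVAT function.yaml: metadata.annotations.spec is a JSON string containing a list of label dictionaries, each with at least "id" and "name". The names and any skeleton-specific fields such as the svg template are assumptions here and may differ in your CVAT version.)

```yaml
# Hypothetical sketch of metadata.annotations for a serverless function.
metadata:
  name: custom-skeleton-function
  annotations:
    name: Custom skeleton model
    type: detector
    framework: tensorflow
    # spec is a JSON string: a list of label dictionaries.
    spec: |
      [
        { "id": 1, "name": "person" },
        { "id": 2, "name": "hand" }
      ]
```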
@OnceUponATimeMathley sorry for being late in responding to your query. Did you manage to find a solution? |
Hello team, I have a question about how to create my own function.yaml, main.py, and model_loader.py files for an object detection model fine-tuned on a custom dataset, starting from the pre-trained TensorFlow model zoo (e.g., ssdmobilenet). Could you please help me create these files or share the documentation, if available?
As far as I remember, in the previous version of CVAT this was possible through the use of .bin and .xml files (derived from the OpenVINO model optimizer) along with the label_map.json and interp.py script. But with the architectural changes (e.g., the introduction of Nuclio for serverless deployment), the traditional way is deprecated, which makes things difficult for me to grasp.
PS: I have gone through the serverless-tutorial instructions and other related issues, but couldn't get the answer.
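(As a starting point, a minimal sketch of what a Nuclio main.py for a custom TensorFlow detector could look like is shown below. It assumes a ModelLoader class in model_loader.py with an infer(image, threshold) method that returns a list of detection dictionaries; the model path, threshold default, and result keys are illustrative assumptions, not CVAT's actual files.)

```python
# main.py — hypothetical sketch of a Nuclio handler for a custom detector.
import base64
import io
import json

from PIL import Image

from model_loader import ModelLoader  # assumed wrapper around the TF model


def init_context(context):
    # Runs once when the function container starts: load the model and
    # keep it in user_data so every invocation can reuse it.
    context.logger.info("Init context...  0%")
    model_path = "/opt/nuclio/model/frozen_inference_graph.pb"  # assumed path inside the image
    context.user_data.model_handler = ModelLoader(model_path)
    context.logger.info("Init context...100%")


def handler(context, event):
    # CVAT sends a JSON body with a base64-encoded image and an optional threshold.
    data = event.body
    image = Image.open(io.BytesIO(base64.b64decode(data["image"])))
    threshold = float(data.get("threshold", 0.5))

    # Expected to produce a list of dicts such as
    # {"confidence": 0.9, "label": "person", "points": [x1, y1, x2, y2], "type": "rectangle"}
    results = context.user_data.model_handler.infer(image, threshold)

    return context.Response(
        body=json.dumps(results),
        headers={},
        content_type="application/json",
        status_code=200,
    )
```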