device-ai-openvino-ovms

An EdgeX device service leveraging OVMS (OpenVINO Model Server) for AI inference.


Overview

This demo shows an EdgeX device service performing object detection through OpenVINO Model Server.

Prerequisites

EdgeX

  • A running EdgeX Foundry deployment (core services) that the device service can register with.

Third party

  • OpenVINO is a toolkit for neural network optimization for Intel® hardware.
  • OpenVINO Model Server is a model serving framework for OpenVINO™ toolkit.
  • GoCV is a Go package for computer vision using OpenCV 4 and beyond.

Documentation

For the latest documentation, please visit https://docs.edgexfoundry.org/

Features

  • An object detection model (ssdlite_mobilenet_v2) embedded in a demo device service
  • Support for multiple models: any model that shares this one's input and output format can be served the same way. See Model Metadata for more details.
  • Support for multiple Intel® inference devices (CPU, GPU, NPU)
  • Support for multiple devices in one device service
  • Support for both a local and a remote model server, selected via the device protocol (see the sketch below)
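
As a rough illustration of the last point, endpoint selection from the device protocol could look like the Go sketch below. The Host and Port property names are illustrative assumptions, not the service's actual schema; check the device definition files in this repository for the real one.

package main

import "fmt"

// protocolProperties stands in for the string map that EdgeX attaches to a
// device's protocol section; the real service reads it from device metadata.
type protocolProperties map[string]string

// ovmsAddress resolves the model server endpoint from the device protocol.
// A missing or empty Host falls back to a local model server.
func ovmsAddress(props protocolProperties) string {
	host := props["Host"]
	if host == "" {
		host = "localhost" // local model server
	}
	port := props["Port"]
	if port == "" {
		port = "9000" // default OVMS gRPC port used in this demo
	}
	return fmt.Sprintf("%s:%s", host, port)
}

func main() {
	fmt.Println(ovmsAddress(protocolProperties{}))                                  // localhost:9000
	fmt.Println(ovmsAddress(protocolProperties{"Host": "10.0.0.5", "Port": "9000"})) // 10.0.0.5:9000
}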

Usage

Install dependencies

OpenVINO Model Server (OVMS)

Step 1: Prepare Docker

Please refer to the OVMS Quickstart Guide.

Step 2: Provide a Model

This demo uses the ssdlite_mobilenet_v2 model. OVMS expects each model version in its own numeric subdirectory, so place the IR files under a directory named 1:

model
└── 1
    ├── coco_91cl_bkgr.txt
    ├── ssdlite_mobilenet_v2.bin
    ├── ssdlite_mobilenet_v2.mapping
    └── ssdlite_mobilenet_v2.xml

Step 3: Start the Model Server Container

Start the container:

  • Using CPU for inference:

docker run -d -u $(id -u) --rm \
  -v ${PWD}/model:/model \
  -p 9000:9000 -p 8000:8000 \
  openvino/model_server:latest \
  --model_name ssd \
  --model_path /model \
  --port 9000 \
  --rest_port 8000

  • Using GPU for inference (mounting /dev/dri gives the container access to the GPU render devices):

docker run -d -u $(id -u) --rm \
  --privileged \
  -v ${PWD}/model:/model -v /dev/dri:/dev/dri \
  -p 9000:9000 -p 8000:8000 \
  openvino/model_server:latest \
  --model_name ssd \
  --model_path /model \
  --port 9000 \
  --rest_port 8000 \
  --target_device GPU

Step 4: Check that OVMS is running

Check the model status (the key matches the --model_name given above):

curl http://localhost:8000/v1/config
{
  "ssd": {
    "model_version_status": [
      {
        "version": "1",
        "state": "AVAILABLE",
        "status": {
          "error_code": "OK",
          "error_message": "OK"
        }
      }
    ]
  }
}
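
If you prefer to script this check, a minimal Go readiness probe against the same REST endpoint might look like the following; the struct models only the fields read here.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// modelStatus mirrors the per-model entry in the /v1/config response.
type modelStatus struct {
	ModelVersionStatus []struct {
		Version string `json:"version"`
		State   string `json:"state"`
	} `json:"model_version_status"`
}

// waitForModel polls OVMS until the named model reports AVAILABLE.
func waitForModel(name string) error {
	for i := 0; i < 30; i++ {
		resp, err := http.Get("http://localhost:8000/v1/config")
		if err == nil {
			var cfg map[string]modelStatus
			if json.NewDecoder(resp.Body).Decode(&cfg) == nil {
				for _, v := range cfg[name].ModelVersionStatus {
					if v.State == "AVAILABLE" {
						resp.Body.Close()
						return nil
					}
				}
			}
			resp.Body.Close()
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("model %q never became AVAILABLE", name)
}

func main() {
	if err := waitForModel("ssd"); err != nil {
		panic(err)
	}
	fmt.Println("ssd is AVAILABLE")
}
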
Inspect the model metadata; note the 1x300x300x3 uint8 input tensor and the 1x1x100x7 float output tensor:

curl http://localhost:8000/v1/models/ssd/metadata
{
  "modelSpec": {
    "name": "ssd",
    "signatureName": "",
    "version": "1"
  },
  "metadata": {
    "signature_def": {
      "@type": "type.googleapis.com/tensorflow.serving.SignatureDefMap",
      "signatureDef": {
        "serving_default": {
          "inputs": {
            "image_tensor": {
              "dtype": "DT_UINT8",
              "tensorShape": {
                "dim": [
                  {
                    "size": "1",
                    "name": ""
                  },
                  {
                    "size": "300",
                    "name": ""
                  },
                  {
                    "size": "300",
                    "name": ""
                  },
                  {
                    "size": "3",
                    "name": ""
                  }
                ],
                "unknownRank": false
              },
              "name": "image_tensor"
            }
          },
          "outputs": {
            "detection_boxes": {
              "dtype": "DT_FLOAT",
              "tensorShape": {
                "dim": [
                  {
                    "size": "1",
                    "name": ""
                  },
                  {
                    "size": "1",
                    "name": ""
                  },
                  {
                    "size": "100",
                    "name": ""
                  },
                  {
                    "size": "7",
                    "name": ""
                  }
                ],
                "unknownRank": false
              },
              "name": "detection_boxes"
            }
          },
          "methodName": "",
          "defaults": {

          }
        }
      }
    }
  }
}
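
This metadata is what makes other models drop-in replacements: the demo's pre- and post-processing only depend on the 1x300x300x3 uint8 input and the 1x1x100x7 float output, where each row of 7 values follows the Open Model Zoo SSD convention [image_id, label, confidence, x_min, y_min, x_max, y_max] with normalized coordinates. Below is a minimal GoCV sketch of that shaping and decoding; it is illustrative, not the repository's actual code.

package main

import (
	"fmt"
	"image"

	"gocv.io/x/gocv"
)

func main() {
	// Load a frame with GoCV (input.jpg is a placeholder path).
	img := gocv.IMRead("input.jpg", gocv.IMReadColor)
	if img.Empty() {
		panic("cannot read input.jpg")
	}
	defer img.Close()

	// The model input is a 1x300x300x3 uint8 tensor (NHWC), so resize
	// the frame to 300x300 before serializing it into a Predict request.
	resized := gocv.NewMat()
	defer resized.Close()
	gocv.Resize(img, &resized, image.Pt(300, 300), 0, 0, gocv.InterpolationLinear)
	raw := resized.ToBytes() // raw HWC uint8 buffer for image_tensor
	_ = raw

	// detection_boxes is 1x1x100x7 float32. A flat []float32 response
	// (zeroed here as a stand-in) decodes as 100 rows of
	// [image_id, label, confidence, x_min, y_min, x_max, y_max].
	out := make([]float32, 100*7)
	for i := 0; i < 100; i++ {
		det := out[i*7 : (i+1)*7]
		if det[2] < 0.5 { // confidence threshold; 0.5 is an arbitrary choice
			continue
		}
		// Coordinates are normalized; scale back to the original frame size.
		x0 := int(det[3] * float32(img.Cols()))
		y0 := int(det[4] * float32(img.Rows()))
		x1 := int(det[5] * float32(img.Cols()))
		y1 := int(det[6] * float32(img.Rows()))
		fmt.Printf("label=%d conf=%.2f box=(%d,%d)-(%d,%d)\n",
			int(det[1]), det[2], x0, y0, x1, y1)
	}
}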

Install GoCV

Install GoCV and its native dependencies (OpenCV 4):

git clone https://github.com/hybridgroup/gocv.git
cd gocv
make install
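
To verify the installation, a small program can print the GoCV and linked OpenCV versions (this mirrors GoCV's own version command):

package main

import (
	"fmt"

	"gocv.io/x/gocv"
)

func main() {
	// Print the GoCV and linked OpenCV versions as an installation sanity check.
	fmt.Printf("gocv version: %s\n", gocv.Version())
	fmt.Printf("opencv lib version: %s\n", gocv.OpenCVVersion())
}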

Build and Run the demo

Build the demo

make build

Run the demo

make run

Result preview

There is a live MJPEG link in the demo device service that you can use to watch the inference results online.

The link format is http://[hostname]:18080/[device-name].mjpeg, e.g. http://localhost:18080/Simple-OpenVINO-Device.mjpeg in this demo; any MJPEG-capable client (a web browser, VLC, ffplay) can open it.
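
For reference, an MJPEG preview endpoint of this kind can be served with nothing but the Go standard library: each JPEG frame is written as one part of a multipart/x-mixed-replace response. The sketch below is illustrative, not the repository's actual implementation, and reuses a single frame from disk.

package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

// mjpegHandler streams JPEG frames as a multipart/x-mixed-replace response,
// which browsers render as motion JPEG.
func mjpegHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "multipart/x-mixed-replace; boundary=frame")
	frame, err := os.ReadFile("result.jpg") // placeholder: one annotated frame
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	for {
		fmt.Fprintf(w, "--frame\r\nContent-Type: image/jpeg\r\nContent-Length: %d\r\n\r\n", len(frame))
		if _, err := w.Write(frame); err != nil {
			return // client disconnected
		}
		fmt.Fprint(w, "\r\n")
		if f, ok := w.(http.Flusher); ok {
			f.Flush()
		}
		time.Sleep(100 * time.Millisecond) // ~10 fps
	}
}

func main() {
	http.HandleFunc("/Simple-OpenVINO-Device.mjpeg", mjpegHandler)
	http.ListenAndServe(":18080", nil)
}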

  • Snapshot: result image (see the repository)

  • Video: inference result video clip (see the repository)

License

Apache-2.0
