This is a demo of an EdgeX device service powered by OpenVINO Model Server.
EdgeX
- EdgeX Device Service SDK is a Go library for EdgeX device services.
Third party
- OpenVINO™ is a toolkit for optimizing and deploying neural networks on Intel® hardware.
- OpenVINO Model Server is a model serving framework for the OpenVINO™ toolkit.
- GoCV is a Go package for computer vision using OpenCV 4 and beyond.
For the latest EdgeX documentation, please visit https://docs.edgexfoundry.org/
- An object detection model (ssdlite_mobilenet_v2) embedded in the demo device service
- Support for multiple models: any model that shares the same input and output format as this one can be served by this demo. See Model Metadata for more details.
- Support for multiple Intel inference devices (CPU, GPU, NPU)
- Support for multiple devices in a single device service
- Support for both local and remote model servers, selected via the device protocol (see the sketch after this list)
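A minimal sketch of how a device's protocol properties could point the service at a local or a remote model server. The protocol name and property keys used below (ovms, GrpcAddress, HttpAddress) are hypothetical placeholders for illustration, not the demo's actual configuration keys.

package main

import "fmt"

// ProtocolProperties mirrors the EdgeX notion of per-protocol key/value
// settings on a device; it is defined locally to keep the sketch self-contained.
type ProtocolProperties map[string]string

func main() {
	// Hypothetical protocol block for a device backed by a remote model server.
	// Pointing the addresses at localhost would select a model server running
	// next to the device service instead.
	protocols := map[string]ProtocolProperties{
		"ovms": {
			"GrpcAddress": "remote-host:9000", // gRPC inference endpoint (placeholder)
			"HttpAddress": "remote-host:8000", // REST status/metadata endpoint (placeholder)
		},
	}
	fmt.Println("model server gRPC endpoint:", protocols["ovms"]["GrpcAddress"])
}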
Please refer to the OVMS Quickstart Guide.
This demo uses the ssdlite_mobilenet_v2 model, organized in the directory layout expected by OVMS:
model
└── 1
├── coco_91cl_bkgr.txt
├── ssdlite_mobilenet_v2.bin
├── ssdlite_mobilenet_v2.mapping
└── ssdlite_mobilenet_v2.xml
Start the container:
- Using the CPU for inference:
docker run -d -u $(id -u) --rm \
-v ${PWD}/model:/model \
-p 9000:9000 -p 8000:8000 \
openvino/model_server:latest \
--model_name ssd \
--model_path /model \
--port 9000 \
--rest_port 8000
- Using the GPU for inference:
docker run -d -u $(id -u) --rm \
--privileged \
-v ${PWD}/model:/model -v /dev/dri:/dev/dri \
-p 9000:9000 -p 8000:8000 \
openvino/model_server:latest \
--model_name ssd \
--model_path /model \
--port 9000 \
--rest_port 8000 \
--target_device GPU
Check that the model has been loaded and is available:
curl http://localhost:8000/v1/config
{
"faster_rcnn": {
"model_version_status": [
{
"version": "1",
"state": "AVAILABLE",
"status": {
"error_code": "OK",
"error_message": "OK"
}
}
]
}
}
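For reference, a minimal Go sketch that performs the same availability check programmatically. It only assumes the REST endpoint shown above (http://localhost:8000/v1/config) and the response structure in the example output.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Query the configuration endpoint of the model server started above.
	resp, err := http.Get("http://localhost:8000/v1/config")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Decode just enough of the response to read each model's version state.
	var config map[string]struct {
		ModelVersionStatus []struct {
			Version string `json:"version"`
			State   string `json:"state"`
		} `json:"model_version_status"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&config); err != nil {
		log.Fatal(err)
	}
	for name, model := range config {
		for _, v := range model.ModelVersionStatus {
			fmt.Printf("model %s version %s: %s\n", name, v.Version, v.State)
		}
	}
}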
Query the model metadata to confirm the expected input and output formats:
curl http://localhost:8000/v1/models/ssd/metadata
{
"modelSpec": {
"name": "ssd",
"signatureName": "",
"version": "1"
},
"metadata": {
"signature_def": {
"@type": "type.googleapis.com/tensorflow.serving.SignatureDefMap",
"signatureDef": {
"serving_default": {
"inputs": {
"image_tensor": {
"dtype": "DT_UINT8",
"tensorShape": {
"dim": [
{
"size": "1",
"name": ""
},
{
"size": "300",
"name": ""
},
{
"size": "300",
"name": ""
},
{
"size": "3",
"name": ""
}
],
"unknownRank": false
},
"name": "image_tensor"
}
},
"outputs": {
"detection_boxes": {
"dtype": "DT_FLOAT",
"tensorShape": {
"dim": [
{
"size": "1",
"name": ""
},
{
"size": "1",
"name": ""
},
{
"size": "100",
"name": ""
},
{
"size": "7",
"name": ""
}
],
"unknownRank": false
},
"name": "detection_boxes"
}
},
"methodName": "",
"defaults": {
}
}
}
}
}
}
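The demo itself talks to the model server through the device service; the sketch below only illustrates the input and output formats reported above by sending a standalone REST request with GoCV and the TensorFlow Serving style row format. The image path test.jpg, the BGR channel order, and the interpretation of each 7-element detection row are assumptions to verify against your model.

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"image"
	"log"
	"net/http"

	"gocv.io/x/gocv"
)

func main() {
	// Load an image and resize it to the 1x300x300x3 UINT8 layout reported
	// by the metadata endpoint above. "test.jpg" is a placeholder path.
	img := gocv.IMRead("test.jpg", gocv.IMReadColor)
	if img.Empty() {
		log.Fatal("cannot read test.jpg")
	}
	defer img.Close()

	resized := gocv.NewMat()
	defer resized.Close()
	gocv.Resize(img, &resized, image.Pt(300, 300), 0, 0, gocv.InterpolationLinear)

	// OpenCV loads images as BGR; whether the model expects BGR or RGB depends
	// on how it was converted, so adjust the channel order if needed.
	data, err := resized.DataPtrUint8()
	if err != nil {
		log.Fatal(err)
	}

	// Re-shape the flat pixel buffer into a 300x300x3 nested slice so it can
	// be serialized as JSON numbers in the "instances" row format.
	tensor := make([][][]int, 300)
	for y := 0; y < 300; y++ {
		tensor[y] = make([][]int, 300)
		for x := 0; x < 300; x++ {
			off := (y*300 + x) * 3
			tensor[y][x] = []int{int(data[off]), int(data[off+1]), int(data[off+2])}
		}
	}

	body, err := json.Marshal(map[string]interface{}{
		"instances": []interface{}{tensor},
	})
	if err != nil {
		log.Fatal(err)
	}

	// POST to the REST predict endpoint of the "ssd" model started above.
	resp, err := http.Post("http://localhost:8000/v1/models/ssd:predict",
		"application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// The detection_boxes output has shape 1x1x100x7; each detection row is
	// typically [image_id, label, confidence, x_min, y_min, x_max, y_max].
	var result map[string]interface{}
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		log.Fatal(err)
	}
	fmt.Println(result)
}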
Install GoCV and its dependencies:
git clone https://github.com/hybridgroup/gocv.git
cd gocv
make install
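Optionally, a quick sanity check that GoCV built correctly and links against OpenCV (not part of the demo itself):

package main

import (
	"fmt"

	"gocv.io/x/gocv"
)

func main() {
	// Print the GoCV and underlying OpenCV versions to confirm the install.
	fmt.Println("gocv version:", gocv.Version())
	fmt.Println("opencv lib version:", gocv.OpenCVVersion())
}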
Build the demo
make build
Run the demo
make run
There is a live stream link in the demo device service that you can use to check the inference results online.
The link format is http://[hostname]:18080/[device-name].mjpeg, for example http://localhost:18080/Simple-OpenVINO-Device.mjpeg in this demo.
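The stream can also be consumed programmatically. Below is a minimal GoCV sketch, assuming the default device name and port from the URL above.

package main

import (
	"log"

	"gocv.io/x/gocv"
)

func main() {
	// Open the MJPEG stream exposed by the demo device service.
	stream, err := gocv.OpenVideoCapture("http://localhost:18080/Simple-OpenVINO-Device.mjpeg")
	if err != nil {
		log.Fatal(err)
	}
	defer stream.Close()

	window := gocv.NewWindow("inference result")
	defer window.Close()

	img := gocv.NewMat()
	defer img.Close()

	// Display frames until any key is pressed.
	for {
		if ok := stream.Read(&img); !ok || img.Empty() {
			continue
		}
		window.IMShow(img)
		if window.WaitKey(1) >= 0 {
			break
		}
	}
}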
- Snapshot:
- Video: