Apr'19
This repo is still under development.

This repo aims to provide a step-by-step setup of OpenVINO in an ubuntu:16.04 Docker environment. OpenVINO provides many examples, but the documentation, IMHO, scatters the steps across pages with many branches due to the range of supported compute devices. This repo targets only the CPU with integrated graphics and the USB Neural Compute Stick. We hope to capture the steps in a single readme and in a linear fashion, i.e. set up OpenVINO, configure the Model Optimizer, download models in various frameworks, and run some out-of-the-box samples with the Inference Engine.
This setup has been validated on:
- Skylake Core i7-6770HQ, Iris Pro Graphics 580 (Skull Canyon NUC)
- Kabylake Core i7-7500U, HD Graphics 620
- Neural Compute Stick
Your contribution to this list is very much appreciated! Please add your processor if you have successfully completed the whole process on your system.

The approach we take here is a hybrid of an interactive docker build and the "Dockerfile" way, due to the challenge of storing large prebuilt binary installation archives in git. Yes, I am avoiding git-lfs. On a side note, you could also try to build OpenVINO from source.
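If you are unsure what hardware you have, a quick sketch like the following (run on the host) can help identify it; the exact `lsusb` vendor string for the Compute Stick is an assumption and may vary by stick revision:

```bash
# Identify the CPU model
grep -m1 "model name" /proc/cpuinfo
# Identify the integrated graphics
lspci | grep -i "vga\|display"
# A plugged-in Neural Compute Stick usually reports a Movidius vendor ID (03e7)
lsusb | grep -i "movidius\|03e7"
```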
- Download the OpenVINO installation archive for Linux and the OpenCL drivers and runtime.

  ```bash
  # OpenVINO R5.0.1 for Linux
  mkdir ~/openvino-setup && cd ~/openvino-setup
  mv ~/Downloads/l_openvino_toolkit_p_2018.5.455.tgz .
  tar -zxf l_openvino_toolkit_p_2018.5.455.tgz

  # OpenCL Release 19.01.12103
  mkdir -p ~/openvino-setup/neo && cd ~/openvino-setup/neo
  wget https://github.com/intel/compute-runtime/releases/download/19.01.12103/intel-gmmlib_18.4.0.348_amd64.deb
  wget https://github.com/intel/compute-runtime/releases/download/19.01.12103/intel-igc-core_18.50.1270_amd64.deb
  wget https://github.com/intel/compute-runtime/releases/download/19.01.12103/intel-igc-opencl_18.50.1270_amd64.deb
  wget https://github.com/intel/compute-runtime/releases/download/19.01.12103/intel-opencl_19.01.12103_amd64.deb
  ```
- To use the integrated graphics, the graphics drivers, libVA and their dependencies are required. We will build a docker base image that has these dependencies installed. This step runs on the host.

  ```bash
  cd ~
  git clone https://github.com/vuiseng9/openvino-ubuntu
  cd openvino-ubuntu/docker
  ./build_media_docker.sh
  ```
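  Before moving on, you may want to confirm that the image was built and tagged as `ubuntu-media`, the name the next step assumes:

  ```bash
  sudo docker images | grep ubuntu-media
  ```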
- Instantiate the built `ubuntu-media` container.

  ```bash
  container=ubuntu-media
  sudo xhost +local:`sudo docker inspect --format='{{ .Config.Hostname }}' $container`
  sudo docker run \
      -e DISPLAY=$DISPLAY \
      -v /tmp/.X11-unix:/tmp/.X11-unix \
      -v /home/${USER}:/hosthome \
      -v /home/${USER}/openvino-setup:/workspace/openvino-setup \
      -v /home/${USER}/openvino-ubuntu:/workspace/openvino-ubuntu \
      --device=/dev/dri:/dev/dri \
      --privileged \
      -w /workspace \
      -it ${container} bash
  ```
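  If you later exit or detach from the container, a sketch like this reattaches to it instead of instantiating a fresh one (substitute your actual container ID from `docker ps -a`):

  ```bash
  # List containers, running and exited
  sudo docker ps -a
  # Reopen a shell in a running container
  sudo docker exec -it <container-id> bash
  # Or restart and attach to an exited one
  sudo docker start -ai <container-id>
  ```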
- Install dependencies.

  ```bash
  apt-get update && \
  apt-get install -y \
      autoconf git curl vim libdrm-dev libgl1-mesa-glx libgl1-mesa-dev sudo pciutils \
      libx11-dev openbox unzip xorg xorg-dev cpio python3 lsb-core yasm clinfo eog
  ```
- Install OpenVINO dependent packages.

  ```bash
  cd /workspace/openvino-setup/l_openvino_toolkit_p_2018.5.455
  ./install_cv_sdk_dependencies.sh
  ```
- Install the OpenCL drivers & runtime.

  ```bash
  cd /workspace/openvino-setup/neo
  dpkg -i *.deb
  # By now, running "clinfo" should list devices that include the iGPU
  ```
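  For a quick sanity check, something like this trims the `clinfo` output down to the interesting fields; you should see an Intel OpenCL platform with a GPU device listed:

  ```bash
  clinfo | grep -E "Platform Name|Device Name|Device Type"
  ```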
- Install OpenVINO.

  ```bash
  cd /workspace/openvino-setup/l_openvino_toolkit_p_2018.5.455
  ./install.sh
  ```
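  The installer defaults to `/opt/intel`, which is the path the following steps assume; a quick listing confirms the toolkit landed there:

  ```bash
  ls /opt/intel/computer_vision_sdk/
  ```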
- Add the OpenVINO environment setup script to `~/.bashrc` so that it is sourced whenever the container is instantiated.

  ```bash
  echo "source /opt/intel/computer_vision_sdk/bin/setupvars.sh" >> ~/.bashrc
  source ~/.bashrc
  ```
- Set up the Model Optimizer.

  ```bash
  cd /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/install_prerequisites
  ./install_prerequisites.sh
  ```
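  A minimal smoke test, assuming the prerequisites installed cleanly, is to ask the Model Optimizer entry point for its help text:

  ```bash
  python3 $INTEL_CVSDK_DIR/deployment_tools/model_optimizer/mo.py -h
  ```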
- Validate the OpenVINO setup; you should expect the demo to run successfully.

  ```bash
  cd /opt/intel/computer_vision_sdk/deployment_tools/demo/
  ./demo_squeezenet_download_convert_run.sh
  ```
If you have completed the setup above, the Model Optimizer has been configured. This section provides steps to translate DL framework models to OpenVINO's intermediate representation (IR) format. We focus only on Caffe and Tensorflow at the moment.
- Download sample models. The default installation comes with a python script to download popular topologies, but mostly Caffe models; we provide scripts to download Tensorflow models. Do note that we store the models on the host as they are large (tens of GB in total).

  ```bash
  cd /workspace && mkdir -p /hosthome/openvino-models && ln -sv /hosthome/openvino-models .

  # Run the OpenVINO downloader
  $INTEL_CVSDK_DIR/deployment_tools/model_downloader/downloader.py --all -o /hosthome/openvino-models

  # Download frozen Tensorflow models (object detection and quantized)
  cd /workspace/openvino-ubuntu/scripts/
  ./dl-tf-obj-det-frozen-mdl.sh
  ./dl-tf-quant-frozen-mdl.sh
  # ./dl-tfslim-mdl.sh
  ```
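  Since the full set of models weighs in at tens of GB, it is worth checking the host's free space before kicking off the downloads, and the total size afterwards:

  ```bash
  # Free space on the host-mounted volume
  df -h /hosthome
  # Size of the downloaded models
  du -sh /hosthome/openvino-models
  ```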
- Some of the downloaded models are already in IR format. We will convert the rest of them to IR; a sketch of the underlying `mo.py` invocation follows the scripts below.

  ```bash
  # Caffe
  cd /workspace/openvino-ubuntu
  ./scripts/run_mo_caffe.sh 2>&1 | tee log.run_mo_caffe

  # Tensorflow - only a few models at the moment
  cd /workspace/openvino-ubuntu
  ./scripts/mo/run_mo_tf-obj-det.sh 2>&1 | tee log.run_mo_tf-obj-det
  ```
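  For reference, here is a sketch of the kind of `mo.py` invocation these wrapper scripts issue; the model and output paths are illustrative placeholders and depend on where the downloader placed things:

  ```bash
  # Hypothetical example: convert a downloaded SqueezeNet Caffe model to FP16 IR
  # (FP16 suits the iGPU and the Compute Stick; use FP32 for CPU)
  python3 $INTEL_CVSDK_DIR/deployment_tools/model_optimizer/mo.py \
      --input_model /hosthome/openvino-models/<path-to>/squeezenet1.1.caffemodel \
      --data_type FP16 \
      --output_dir /hosthome/openvino-models/ir/squeezenet1.1/FP16
  ```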
Since figuring out the input arguments can be challenging due to the many combinations of models and options, we provide the runnable CLI invocations in the form of bash scripts; you just need to execute them. The demos/samples print text output to the console or generate output picture(s).
- Samples

  ```bash
  cd /workspace/openvino-ubuntu/scripts/samples
  # Run any script here
  ```
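  To give a flavor of what these scripts wrap, here is a hedged example of running the classification sample built by the earlier demo step; the binary location and IR path are assumptions and may differ on your setup:

  ```bash
  cd ~/inference_engine_samples_build/intel64/Release
  ./classification_sample \
      -i /opt/intel/computer_vision_sdk/deployment_tools/demo/car.png \
      -m /hosthome/openvino-models/ir/squeezenet1.1/FP16/squeezenet1.1.xml \
      -d GPU    # or CPU, or MYRIAD for the Compute Stick
  ```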
- Demos

  ```bash
  cd /workspace/openvino-ubuntu/scripts/demos
  # Run any script here
  ```