Intel OpenVINO Installation Guide with AWS Greengrass Setup

Intel OpenVINO

Develop applications and solutions that emulate human vision with the Open Visual Inference & Neural Network Optimization (OpenVINO™) toolkit. Based on convolutional neural networks (CNN), the toolkit extends workloads across Intel® hardware and maximizes performance.

  • Enables CNN-based deep learning inference on the edge
  • Supports heterogeneous execution across computer vision accelerators—CPU, GPU, Intel® Movidius™ Neural Compute Stick, and FPGA—using a common API
  • Speeds time to market via a library of functions and preoptimized kernels
  • Includes optimized calls for OpenCV and OpenVX*

This guide walks through installing Intel OpenVINO and setting it up with AWS Greengrass.

Preparation

OS : Ubuntu* 16.04

Models : BVLC AlexNet (classification) and SqueezeNetSSD-5Class (object detection), both converted to IR in the Model Optimizer section below.

Install OpenVINO :
Refer to this article to install Intel OpenVINO : https://software.intel.com/en-us/articles/OpenVINO-Install-Linux
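As a rough sketch, the installation typically means unpacking the downloaded package and running its installer; the archive name below is an assumption based on the 2018.3.343 release referenced later in this guide:

tar -xvzf l_openvino_toolkit_p_2018.3.343.tgz
cd l_openvino_toolkit_p_2018.3.343
sudo ./install.sh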
The “opencv-python” package needs to be upgraded to version 3.x :
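For example, using pip (an assumed invocation; the upper bound keeps the package on a 3.x release):

pip install --upgrade "opencv-python>=3.4,<4"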

Setting Environment

Command :

Command :

Command :
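A minimal sketch, assuming the default installation path: the toolkit environment is typically initialized by sourcing setupvars.sh, which sets LD_LIBRARY_PATH, PYTHONPATH, and related variables.

source /opt/intel/computer_vision_sdk/bin/setupvars.sh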

Setting Up the Model Optimizer and Converting the Model to IR

First, set up the Model Optimizer conversion tool: https://software.intel.com/en-us/articles/OpenVINO-ModelOptimizer

The Model Optimizer uses Python 3.5, whereas the Greengrass samples use Python 2.7. To keep the Model Optimizer from affecting the global Python configuration, activate a virtual environment as shown below:

Command :

Command :

Command :
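A sketch of one way to do this, assuming virtualenv is available and the toolkit is in its default location (the environment directory name mo_env is illustrative, not from the original):

virtualenv -p /usr/bin/python3.5 mo_env    # create a Python 3.5 environment
source mo_env/bin/activate                 # activate it for this shell session
cd /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/install_prerequisites
./install_prerequisites_caffe.sh           # install Caffe-specific Model Optimizer dependencies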

For CPU, models should be converted with data type FP32; for GPU/FPGA, they should be converted with data type FP16 for the best performance.

For classification using the BVLC AlexNet model:
Command :
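A sketch of a typical Model Optimizer invocation for this model, using the placeholders described below (the Caffe file names are assumptions, not from the original):

cd /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer
python3 mo.py --framework caffe --input_model <model_location>/bvlc_alexnet.caffemodel --input_proto <model_location>/deploy.prototxt --data_type <data_type> --output_dir <output_dir>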

For object detection using the SqueezeNetSSD-5Class model:
Command :
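A corresponding sketch for the SSD model (again, the Caffe file names are assumptions):

python3 mo.py --framework caffe --input_model <model_location>/SqueezeNetSSD-5Class.caffemodel --input_proto <model_location>/SqueezeNetSSD-5Class.prototxt --data_type <data_type> --output_dir <output_dir>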

Here, <model_location> is the directory where the user downloaded the models, <data_type> is FP32 or FP16 depending on the target device, and <output_dir> is the directory where the user wants to store the IR. The IR consists of an .xml file describing the network structure and a .bin file containing the weights. The .xml file should be passed as <PARAM_MODEL_XML>, mentioned in the Configuring the Lambda Function section. In the BVLC AlexNet model, the prototxt defines the input shape with a default batch size of 10. To use any other batch size, the entire input shape must be provided as an argument to the Model Optimizer. For example, to use batch size 1, provide --input_shape [1,3,227,227].
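For instance, extending the assumed AlexNet invocation above to convert with batch size 1:

python3 mo.py --framework caffe --input_model <model_location>/bvlc_alexnet.caffemodel --input_proto <model_location>/deploy.prototxt --data_type <data_type> --output_dir <output_dir> --input_shape [1,3,227,227]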

Setting Up the AWS Greengrass + Intel OpenVINO Demo

Refer to this article for the Greengrass sample setup : https://software.intel.com/en-us/articles/OpenVINO-IE-Samples#inpage-nav-16

The AWS Greengrass samples are located in :
/opt/intel/computer_vision_sdk/inference_engine/samples/python_samples/greengrass_samples/

However, some paths have changed in the openvino_toolkit_p_2018.3.343 release, so the following values need to be updated (for Python 2):

LD_LIBRARY_PATH :
/opt/intel/computer_vision_sdk/opencv/share/OpenCV/3rdparty/lib:/opt/intel/computer_vision_sdk/opencv/lib:/opt/intel/opencl:/opt/intel/computer_vision_sdk/deployment_tools/inference_engine/external/cldnn/lib:/opt/intel/computer_vision_sdk/deployment_tools/inference_engine/external/mkltiny_lnx/lib:/opt/intel/computer_vision_sdk/deployment_tools/inference_engine/lib/ubuntu_16.04/intel64:/opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/model_optimizer_caffe/bin:/opt/intel/computer_vision_sdk/openvx/lib

PYTHONPATH :
/opt/intel/computer_vision_sdk/python/python2.7/ubuntu16/

PARAM_CPU_EXTENSION_PATH :
/opt/intel/computer_vision_sdk/deployment_tools/inference_engine/lib/ubuntu_16.04/intel64/libcpu_extension_avx2.so
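These three values are meant to be set as environment variables in the Greengrass Lambda configuration (see the Configuring the Lambda Function section referenced above). For a quick sanity check outside Greengrass, they can also be exported in a shell session; a sketch, with the long LD_LIBRARY_PATH value abbreviated:

export LD_LIBRARY_PATH=<LD_LIBRARY_PATH value above>
export PYTHONPATH=/opt/intel/computer_vision_sdk/python/python2.7/ubuntu16/
export PARAM_CPU_EXTENSION_PATH=/opt/intel/computer_vision_sdk/deployment_tools/inference_engine/lib/ubuntu_16.04/intel64/libcpu_extension_avx2.so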
