Intel OpenVINO Installation Guide with AWS Greengrass setting

Intel OpenVINO

Develop applications and solutions that emulate human vision with the Open Visual Inference & Neural Network Optimization (OpenVINO™) toolkit. Based on convolutional neural networks (CNN), the toolkit extends workloads across Intel® hardware and maximizes performance.

  • Enables CNN-based deep learning inference on the edge
  • Supports heterogeneous execution across computer vision accelerators—CPU, GPU, Intel® Movidius™ Neural Compute Stick, and FPGA—using a common API
  • Speeds time to market via a library of functions and preoptimized kernels
  • Includes optimized calls for OpenCV and OpenVX*

This guide walks through installing Intel OpenVINO and configuring it with AWS Greengrass.


Prepare :
OS : Ubuntu* 16.04

Model :

Install OpenVINO :
Refer to this article to install Intel OpenVINO :
The “opencv-python” package needs to be upgraded to version 3.x :

sudo python -m pip install --upgrade opencv-python
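As a sanity check, the version string reported by `python -c 'import cv2; print(cv2.__version__)'` can be tested against the 3.x requirement. The helper below is illustrative and not part of the original guide:

```shell
# Return success if an opencv-python version string is 3.x or newer.
cv_version_ok() {
  case "$1" in
    [3-9].*) return 0 ;;
    *)       return 1 ;;
  esac
}

if cv_version_ok "3.4.2"; then
  echo "opencv-python is new enough"
fi
```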

Setting Environment

Command :

source <INSTALL_DIR>/bin/

Command :

cd <INSTALL_DIR>/deployment_tools/model_optimizer/install_prerequisites

Command :

sudo -E ./
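The three commands above lost their script names when this post was published. The sketch below fills them in with assumptions based on the 2018 OpenVINO layout (the environment script is typically `bin/setupvars.sh`, the prerequisites installer `install_prerequisites.sh`, and the default install location `/opt/intel/computer_vision_sdk`); adjust all three to match your installation:

```shell
# Assumed default install path for the 2018 releases; override as needed.
INSTALL_DIR="${INSTALL_DIR:-/opt/intel/computer_vision_sdk}"

# Load the OpenVINO environment variables (setupvars.sh is an assumed name).
SETUPVARS="$INSTALL_DIR/bin/setupvars.sh"
if [ -f "$SETUPVARS" ]; then
  . "$SETUPVARS"
fi

# Install Model Optimizer prerequisites (install_prerequisites.sh is assumed).
PREREQ_DIR="$INSTALL_DIR/deployment_tools/model_optimizer/install_prerequisites"
if [ -d "$PREREQ_DIR" ]; then
  (cd "$PREREQ_DIR" && sudo -E ./install_prerequisites.sh)
fi
```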

Setting Up The Model Optimizer And Converting The Model To IR

First, set up the conversion tool, the Model Optimizer:

The Model Optimizer uses Python 3.5, whereas the Greengrass samples use Python 2.7. So that the Model Optimizer does not affect the global Python configuration, activate a virtual environment as shown below:

Command :

sudo ./ venv

Command :

cd <INSTALL_DIR>/deployment_tools/model_optimizer

Command :

source venv/bin/activate

For CPU, models should be converted with data type FP32; for GPU/FPGA, data type FP16 gives the best performance.
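The rule above can be captured in a small helper (illustrative only; the device names are the ones this guide mentions):

```shell
# Pick the Model Optimizer --data_type for a target device:
# FP32 for CPU, FP16 for GPU and FPGA.
mo_data_type() {
  case "$1" in
    GPU|FPGA) echo "FP16" ;;
    *)        echo "FP32" ;;
  esac
}

mo_data_type CPU   # prints FP32
mo_data_type FPGA  # prints FP16
```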

For classification using the BVLC AlexNet model:
Command :

python --framework caffe --input_model <model_location>/bvlc_alexnet.caffemodel --input_proto <model_location>/deploy.prototxt --data_type <data_type> --output_dir <output_dir> --input_shape [1,3,227,227]

For object detection using SqueezeNetSSD-5Class model:
Command :

python --framework caffe --input_model <model_location>/SqueezeNetSSD-5Class.caffemodel --input_proto <model_location>/SqueezeNetSSD-5Class.prototxt --data_type <data_type> --output_dir <output_dir>

Here, <model_location> is where the user downloaded the models, <data_type> is FP32 or FP16 depending on the target device, and <output_dir> is the directory where the user wants to store the IR. The IR consists of an .xml file describing the network structure and a .bin file containing the weights. The .xml file should be passed as <PARAM_MODEL_XML>, mentioned in the Configuring the Lambda Function section.

In the BVLC AlexNet model, the prototxt defines the input shape with a batch size of 10 by default. To use any other batch size, the entire input shape must be provided as an argument to the Model Optimizer. For example, to use batch size 1, provide --input_shape [1,3,227,227].
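For example, the AlexNet conversion for batch size 1 can be assembled as below. This is a sketch: `mo.py` is an assumed entry-point name (the original commands omit the script name), and the <model_location> and <output_dir> placeholders must be replaced with real paths:

```shell
BATCH=1
DATA_TYPE=FP32  # FP32 for CPU, FP16 for GPU/FPGA

# Build the Model Optimizer argument string for the given batch size.
MO_ARGS="--framework caffe \
  --input_model <model_location>/bvlc_alexnet.caffemodel \
  --input_proto <model_location>/deploy.prototxt \
  --data_type $DATA_TYPE --output_dir <output_dir> \
  --input_shape [$BATCH,3,227,227]"

echo "python mo.py $MO_ARGS"
```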

Setting AWS Greengrass + Intel OpenVINO Demo

Refer to this guide to set up the Greengrass sample :

The AWS Greengrass sample is located at :

However, some paths have changed in the openvino_toolkit_p_2018.3.343 release and need to be modified (Python 2):



