How to use OpenVINO inference engine in QNAP AWS Greengrass?


To use OpenVINO inference, you need to upgrade AWS Greengrass to v1.1 (coming soon). In this tutorial you will learn how to run inference with OpenVINO. This new feature requires a QNAP NAS with an Intel CPU. (Refer to the AWS Greengrass release notes for the list of NAS models that support inference.)

Preparation

  • Set up AWS Greengrass first

  1. Refer to: How to setup AWS Greengrass on QNAP NAS
  • Set up the Lambda function for AWS IoT Greengrass

  1. Download the sample Lambda function: https://github.com/qnap-dev/qnap-qiot-sdks/blob/master/projects/AWSGreengrass-Edge-Analytics-FaaS/greengrass_object_detection_sample_ssd.py
  2. Create and package a Lambda function (steps 5-9): put the greengrasssdk folder and greengrass_object_detection_sample_ssd.py into a zip file, then upload it to AWS to create the Lambda function (see the handler sketch below): https://docs.aws.amazon.com/greengrass/latest/developerguide/create-lambda.html
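
For reference, below is a minimal sketch (not the Intel sample itself) of the handler shape the packaged file must expose. The real greengrass_object_detection_sample_ssd.py additionally loads the OpenVINO model and runs SSD inference; the topic name matches the subscription configured later in this tutorial.

    import greengrasssdk

    # Greengrass-local client used to publish results back to AWS IoT
    client = greengrasssdk.client('iot-data')

    def function_handler(event, context):
        # The Intel sample publishes its detection results on "intel/faas/ssd";
        # the payload here is a placeholder for illustration only.
        client.publish(topic='intel/faas/ssd', payload='inference result placeholder')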

 

 

  • Run the Model Optimizer and upload the result to AWS S3

  1. SSH into the NAS (How to access QNAP NAS by SSH?)
  2. Change to the AWSGG directory: cd /share/AWSGG
  3. Download the sample models:
    wget https://github.com/intel/Edge-optimized-models/archive/master.zip && unzip master.zip
  4. Enter the Greengrass container: system-docker exec -ti greengrass bash
  5. Change directory: cd /local_src/Edge-optimized-models-master/SqueezeNet\ 5-Class\ detection/
  6. Convert the model to its Intermediate Representation (IR):
    python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo.py --input_model SqueezeNetSSD-5Class.caffemodel --input_proto SqueezeNetSSD-5Class.prototxt --data_type FP32

        • --input_model : path to the model file
        • --input_proto : path to the prototxt file
        • --data_type : data type (note: the CPU does not support the FP16 data type)
  7. Put SqueezeNetSSD-5Class.bin and SqueezeNetSSD-5Class.xml into a zip file (you can use "File Station" for this).
  8. Download the zip file and save it (for more advanced options, see the Model Optimizer Developer Guide).
  9. Upload the zip file to AWS S3: How Do I Upload Files and Folders to an S3 Bucket?
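
For context, here is a rough sketch of how a Lambda like the sample consumes the .xml/.bin pair produced in step 6. It is based on the computer_vision_sdk Python API of this OpenVINO generation, with illustrative paths and file names; it also shows why the CPU extension library configured later matters.

    from openvino.inference_engine import IENetwork, IEPlugin

    # Create the CPU plugin and register the SSE4 CPU extension
    # (the same library as PARAM_CPU_EXTENSION_PATH below)
    plugin = IEPlugin(device="CPU")
    plugin.add_cpu_extension(
        "/opt/intel/computer_vision_sdk/deployment_tools/inference_engine"
        "/lib/ubuntu_16.04/intel64/libcpu_extension_sse4.so")

    # Load the Intermediate Representation produced by mo.py
    net = IENetwork(model="SqueezeNetSSD-5Class.xml",
                    weights="SqueezeNetSSD-5Class.bin")
    exec_net = plugin.load(network=net)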

 

  • Sample video for detection

Refer to the Intel repository intel-iot-devkit/sample-videos and download person-bicycle-car-detection.mp4 to the following path: /AWSGG/Source
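
If you prefer to fetch the file directly from the NAS shell, a small sketch (the GitHub raw URL is an assumption; verify it against the repository):

    import urllib.request

    # Assumed raw-download URL for the sample video
    url = ("https://github.com/intel-iot-devkit/sample-videos"
           "/raw/master/person-bicycle-car-detection.mp4")
    # Save into the shared folder that the Greengrass container mounts
    urllib.request.urlretrieve(
        url, "/share/AWSGG/Source/person-bicycle-car-detection.mp4")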

 

Setting up AWS Greengrass

  • Add Lambda function

  1. Go to your AWS Greengrass Group, switch to Lambdas > click "Add Lambda"

  2. Click the Lambda you created

  3. Choose the Lambda > click "Finish"

 

 

  • Edit Lambda function

  1. Choose the Lambda you created and click "Edit configuration"

  2. Refer to the settings below:
    a. Basic settings:

    b. Environment variables:

    Key: LD_LIBRARY_PATH
    Value: /opt/intel/computer_vision_sdk/opencv/lib:/opt/intel/opencl:/opt/intel/computer_vision_sdk/deployment_tools/inference_engine/external/hddl/lib:/opt/intel/computer_vision_sdk/deployment_tools/inference_engine/external/gna/lib:/opt/intel/computer_vision_sdk/deployment_tools/inference_engine/external/mkltiny_lnx/lib:/opt/intel/computer_vision_sdk/deployment_tools/inference_engine/external/omp/lib:/opt/intel/computer_vision_sdk/deployment_tools/inference_engine/lib/ubuntu_16.04/intel64:/opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/bin:/opt/intel/computer_vision_sdk/openvx/lib:

    Key: PYTHONPATH
    Value: /opt/intel/computer_vision_sdk/python/python2.7:/opt/intel/computer_vision_sdk/python/python2.7/ubuntu16:/opt/intel/computer_vision_sdk/deployment_tools/model_optimizer

    Key: PARAM_MODEL_XML
    Value: /greengrass-input-files/SqueezeNetSSD-5Class.xml
    Note: <MODEL_DIR>/<IR.xml>, where <MODEL_DIR> is user-specified and contains IR.xml, the Intermediate Representation file from the Intel Model Optimizer.

    Key: PARAM_INPUT_SOURCE
    Value: /dest/Source/person-bicycle-car-detection.mp4
    Note: <DATA_DIR>/input.mp4, to be specified by the user. The file name of input.mp4 needs to be the same as the file name on your NAS.

    Key: PARAM_DEVICE
    Value: CPU
    Note: For CPU, specify CPU. For GPU, specify GPU. For FPGA, specify HETERO:FPGA,CPU.

    Key: PARAM_CPU_EXTENSION_PATH
    Value: /opt/intel/computer_vision_sdk/deployment_tools/inference_engine/lib/ubuntu_16.04/intel64/libcpu_extension_sse4.so

    Key: PARAM_OUTPUT_DIRECTORY
    Value: /dest/Result

    Key: PARAM_NUM_TOP_RESULTS
    Value: 3
    Note: User-specified for the classification sample (e.g. 1 for top-1 result, 5 for top-5 results).
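
For orientation, this is roughly how the sample resolves these variables at startup (a sketch, not the sample's exact code). Note that the .bin weights file is derived from the .xml path, so both IR files must sit in the same directory.

    import os

    # Model IR pair, extracted from the S3 zip into /greengrass-input-files
    model_xml = os.environ["PARAM_MODEL_XML"]
    model_bin = os.path.splitext(model_xml)[0] + ".bin"

    # Input video and output directory, both under the /dest volume mapping
    input_source = os.environ["PARAM_INPUT_SOURCE"]
    output_dir = os.environ["PARAM_OUTPUT_DIRECTORY"]

    # Target device: CPU, GPU, or HETERO:FPGA,CPU
    device = os.environ.get("PARAM_DEVICE", "CPU")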

     

     

  3. Click "Update"
  4. Go to the Resources page and click "Add a local resource"
  5. Choose "Add local resource"
  6. Refer to the table below to add each resource. Note that the localDirectory volume maps the container path /local_src to /dest inside the Lambda environment, which is why the environment variables above point at /dest/Source and /dest/Result.

    Resource name: localDirectory
    Resource type: Volume
    Source path: /local_src
    Destination path: /dest
    Group access permission: Automatically
    Access: Read and write

    Resource name: OpenVINOPath
    Resource type: Volume
    Source path: /opt/intel/computer_vision_sdk
    Group access permission: Automatically
    Access: Read-only

    Resource name: GPU [Optional]
    Resource type: Device
    Source path: /dev/dri/renderD128
    Group access permission: Automatically
    Access: Read and write

  7. Add a machine learning resource

  8. Choose "Add machine learning resource"
  9. Enter a resource name and choose "Upload a model in S3" as the model source. Select the model zip you uploaded to S3 and enter /greengrass-input-files as the local path (this matches the directory used in PARAM_MODEL_XML above). Set the access permission to read and write.
     

 

  • Setting up subscriptions

 

  1. Go to the Greengrass Group page > "Subscriptions" > click "Add Subscription"
  2. Choose the Lambda function you created as the source and IoT Cloud as the target. Click "Next".

  3. Enter "intel/faas/ssd" in the Topic filter and click "Next".
  4. Click "Finish"

 

  • Deployment

 

  1. Click "Actions" > "Deploy"
  2. For the first deployment, choose "Automatic detection"
  3. Wait for the deployment to complete
  4. Confirm the deployment completed successfully
  5. If the deployment fails, go to the Deployments page to check the issue.

 

 

  • Verification

 

  1. Go to AWS Greengrass > Logs page and check the status

    Note: If the following message is displayed, the NAS does not support GPU inference; switch to the CPU for inference: "[FATAL]-lambda_runtime.py:108,Failed to import handler function "greengrass_object_detection_sample_ssd.function_handler" due to exception: failed to create engine: clGetPlatformIDs error -1001"
  2. In the AWSGG shared folder, open the Result folder to check the output files

  3. Go to AWS IoT > Test page, enter "intel/faas/ssd" in "Subscription topic", and click "Subscribe to topic"
  4. Check the final result
