How to use the OpenVINO inference engine in QNAP AWS Greengrass?

In this tutorial you will learn how to use OpenVINO to perform inference.

Please note:

  • AWS Greengrass 1.1 (or later) is required.
  • You must be using an Intel-based NAS.
  • Step 1: Set up Lambda functions for AWS IoT Greengrass

    1. Download the sample Lambda function (provided as a zip file).
    2. Refer to the following document for setup: Create and Package a Lambda Function (Steps 5–9).

    (Note: under Lambda > Functions > Your Function > Function code > Handler, the handler must be set to “greengrass_object_detection_sample_ssd.function_handler”.)
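To illustrate the handler setting above, here is a minimal sketch of what the module could look like. This is not the actual QNAP/Intel sample (which runs OpenVINO object detection); only the entry-point naming convention, module name `greengrass_object_detection_sample_ssd` and function `function_handler`, is taken from the setting above, and the environment variable names anticipate the Greengrass group settings configured later in this tutorial.

```python
# greengrass_object_detection_sample_ssd.py -- illustrative sketch only;
# the real sample performs OpenVINO inference in a long-lived loop.
import os

# Values configured as environment variables in the Greengrass group (Step 5)
INPUT_SOURCE = os.environ.get("PARAM_INPUT_SOURCE", "/local_src/Source/input.mp4")
OUTPUT_DIRECTORY = os.environ.get("PARAM_OUTPUT_DIRECTORY", "/local_src/Result")

def function_handler(event, context):
    # In a long-lived Greengrass Lambda, the inference loop typically runs
    # outside the handler at module load time; the handler can be a no-op.
    return None
```

The string entered in the Handler field is simply `<module name>.<function name>`, which is why it must read exactly “greengrass_object_detection_sample_ssd.function_handler”.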





  • Step 5: Set up AWS Greengrass Group

    1. Start the AWS Greengrass service.
    2. Go to “AWS Greengrass Group” > “Settings“.
    3. Go to the “Lambdas” page.
      • Add the Lambda function (refer to Configure the Lambda Function for AWS IoT Greengrass (Steps 1–7)).
      • Set the UID (number) to 0.
      • Set the GID to 0.
      • Under “Containerization”, select “No container (always)”.
      • Set the Timeout to 25.
      • Add environment variables with the following keys and values:
        LD_LIBRARY_PATH: /opt/intel/computer_vision_sdk_2018.5.455/opencv/lib:
        PYTHONPATH: /opt/intel/computer_vision_sdk_2018.5.455
        PARAM_INPUT_SOURCE: /local_src/Source/person-bicycle-car-detection.mp4 (the <DATA_DIR>/input.mp4 path is specified by the user; the file name of input.mp4 must match the file name on your NAS)
        PARAM_DEVICE: CPU (for CPU, specify `CPU`; for GPU, specify `GPU`; for FPGA, specify `HETERO:FPGA,CPU`)
        PARAM_CPU_EXTENSION_PATH: /opt/intel/computer_vision_sdk_2018.5.455
        PARAM_OUTPUT_DIRECTORY: /local_src/Result
        PARAM_NUM_TOP_RESULTS: 3 (user-specified for classification, e.g. 1 for the top-1 result, 5 for the top-5 results)
    4. Go to the “Subscriptions” page.
    5. Click “Add your first Subscription“.
    6. Under “Select a source”, click “Select“.
    7. In the “Lambdas” tab, select “greengrass_object_detection” as the source.
    8. Under “Select a target“, choose “IoT Cloud“.
    9. Click “Next“.
    10. Under “Topic filter“, enter “intel/faas/ssd”.
    11. Click “Next“.
    12. Click “Finish“.
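The environment variables set in Step 5 are what the Lambda function reads at startup. The following sketch shows one way to gather them into a single configuration; the key names match the table above, the `load_inference_config` helper and its defaults are hypothetical.

```python
import os

# Hypothetical helper: collect the Step 5 environment variables into one
# configuration dict. Defaults mirror the example values from the table.
def load_inference_config():
    return {
        "input_source": os.environ.get(
            "PARAM_INPUT_SOURCE", "/local_src/Source/person-bicycle-car-detection.mp4"),
        "device": os.environ.get("PARAM_DEVICE", "CPU"),  # CPU, GPU, or HETERO:FPGA,CPU
        "cpu_extension_path": os.environ.get(
            "PARAM_CPU_EXTENSION_PATH", "/opt/intel/computer_vision_sdk_2018.5.455"),
        "output_directory": os.environ.get("PARAM_OUTPUT_DIRECTORY", "/local_src/Result"),
        "num_top_results": int(os.environ.get("PARAM_NUM_TOP_RESULTS", "3")),
    }

config = load_inference_config()
```

Because the group runs with “No container (always)”, these variables are visible to the Lambda process exactly as set in the Greengrass console.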


  • Step 6: Start ML model & Lambda function deployment

    1. Click “Actions” > “Deploy“.
    2. Click “Automatic detection“.
    3. If deployment is successful, the status will change to “Successfully completed“. If deployment fails, you can click the status to view the reason.


  • Step 7: Run inference and view results

    1. Inference will run automatically after deployment. You can view the status on the AWS Greengrass Log page. If you receive the following message, your NAS does not support GPU-based inference and you must switch to using the CPU:
      “[FATAL], Failed to import handler function “greengrass_object_detection_sample_ssd.function_handler” due to exception: failed to create engine: clGetPlatformIDs error -1001”
    2. Open File Station.
    3. Identified images and results will be available under the “AWSGG/Result” folder.
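As a quick sanity check after deployment, you can confirm that results are being written to the output directory configured in PARAM_OUTPUT_DIRECTORY (shown as “AWSGG/Result” in File Station). The `list_results` helper below is hypothetical, not part of the sample.

```python
import os

# Hypothetical check: list the files the Lambda has written to the Result
# folder. An empty list means no results have been produced yet (or the
# directory does not exist on this machine).
def list_results(result_dir="/local_src/Result"):
    if not os.path.isdir(result_dir):
        return []
    return sorted(os.listdir(result_dir))
```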


