A.I. at the Edge (OpenVINO)

As shown below, a typical machine learning project pipeline consists of four steps: import, build, train and deploy.

Import

In this step we usually acquire data from different sources and sensors. Often the data is not in a format that can be used directly to create a model; instead, it must be converted into a usable format to build a dataset.
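As a minimal sketch of the import step, the snippet below turns raw text-based sensor readings (CSV here) into numeric feature/label pairs. The column names and sample values are illustrative assumptions, not part of any particular toolkit.

```python
import csv
import io

# Illustrative raw sensor data; in practice this would come from files,
# message queues or sensor APIs rather than an inline string.
RAW = """timestamp,temperature,humidity,label
2021-01-01T00:00,21.5,40,ok
2021-01-01T00:01,85.0,12,fault
"""

def build_dataset(raw_text):
    """Parse raw CSV text into (features, label) pairs usable for modelling."""
    rows = csv.DictReader(io.StringIO(raw_text))
    dataset = []
    for row in rows:
        # Convert the text fields we care about into numeric features.
        features = [float(row["temperature"]), float(row["humidity"])]
        dataset.append((features, row["label"]))
    return dataset

dataset = build_dataset(RAW)
print(dataset[0])  # ([21.5, 40.0], 'ok')
```

Real pipelines add cleaning steps (missing values, outliers, normalization), but the shape of the task is the same: raw records in, a structured dataset out.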

Build

In this step we usually start creating models based on the datasets developed in the previous import step.

Train

The build and train steps usually go hand in hand. In this step, we optimize the model to achieve the required results with acceptable efficiency.
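The smallest possible example of "optimizing a model" is fitting a one-parameter linear model y = w·x by gradient descent on the squared error. The data, learning rate and iteration count below are illustrative assumptions.

```python
# Toy training data generated by y = 2x, so training should recover w ≈ 2.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

w = 0.0    # initial model parameter
lr = 0.01  # learning rate

for _ in range(500):  # training iterations
    # Gradient of the mean squared error (w*x - y)^2 with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # gradient descent update

print(round(w, 3))  # converges to 2.0
```

Training a deep network follows the same loop, just with millions of parameters and automatic differentiation supplied by a framework such as TensorFlow.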

Deploy

In this step we deploy the model on a particular edge device. Depending on the architecture of the edge device, a certain level of optimization is required. It is therefore crucial to select the right framework and libraries to ensure the model is optimized for that particular hardware and uses the available computational power as efficiently as possible.

Intel’s OpenVINO

In a typical A.I. at the Edge implementation, arguably the three most common parameters are computational power, cost and performance.

Based on the requirements of the application, we need to clearly identify the required computational power of the edge device. Since computational power directly relates to cost and performance, there is always a trade-off. The trick is to find an optimal solution that is cost-effective but still provides the performance required for a particular application. Different hardware vendors have introduced tools to help developers reach this optimum when implementing A.I. applications on their hardware. Intel has introduced the OpenVINO (Open Visual Inference and Neural network Optimization) toolkit, which optimizes models from various frameworks, such as TensorFlow and Caffe, for Intel hardware. Its heterogeneous capability allows applications to use the available computational power very efficiently.

The OpenVINO™ toolkit is a comprehensive toolkit for quickly developing applications and solutions that solve a variety of tasks, including emulation of human vision, automatic speech recognition, natural language processing, recommendation systems, and many others. Based on the latest generations of artificial neural networks, including Convolutional Neural Networks (CNNs) and recurrent and attention-based networks, the toolkit extends computer vision and non-vision workloads across Intel® hardware, maximizing performance. It accelerates applications with high-performance AI and deep-learning inference deployed from edge to cloud. Refer to the official documentation to learn more about Intel’s OpenVINO.

OpenVINO consists mainly of two parts:

  • Model Optimizer – converts and optimizes models from frameworks such as TensorFlow and Caffe into an Intermediate Representation (IR) tuned for Intel hardware
  • Inference Engine – provides the right plugins for Intel hardware such as CPUs, GPUs, VPUs and FPGAs. It also enables dividing the workload between different devices (heterogeneity)

To learn more about solutions that use A.I. at the edge to solve real-world use cases, visit Tech Data’s IoT solutions catalogue page here.

Naqqash Abbassi
Lead for AI and Vision
Data and IoT Team
Tech Data Europe