Bringing Model Intelligence to the Edge Using the Intel OpenVINO Toolkit and TensorFlow Lite (Part 1)

Saurav Solanki
3 min read · Sep 18, 2020
Road to Edge AI

In this series of blogs, I will show how we can bring model intelligence from the CPU to the Raspberry Pi family using the Intel OpenVINO toolkit. I will briefly talk about OpenVINO and TensorFlow Lite here.

I will keep it to the basics. I am using the Fashion-MNIST dataset to build intuition for the complete workflow. In later parts, we can add the remaining pieces and benchmark the model on-device against relevant metrics.
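
To make the workflow concrete, here is a minimal training sketch of the kind of model I will carry through the series: a small Keras CNN on Fashion-MNIST, saved to disk for later conversion. The architecture and the file name `fashion_mnist_model.h5` are illustrative assumptions, not the exact model from later parts.

```python
import tensorflow as tf

# Load Fashion-MNIST: 28x28 grayscale images, 10 clothing classes
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train = x_train[..., None] / 255.0  # add channel dim, scale to [0, 1]
x_test = x_test[..., None] / 255.0

# A small CNN; just enough to carry through the conversion workflow
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))

# Save the trained model for later conversion to OpenVINO IR / TFLite
model.save("fashion_mnist_model.h5")
```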

Training a model on a CPU or GPU is simple and intuitive, unlike running a model on devices. Every hardware target needs DL acceleration, since deep learning is mostly matrix multiplication inside, and each embedded platform provides its own toolkit: Intel processors → OpenVINO, Arm & Coral boards → TFLite, Nvidia → TensorRT, etc.

The common problems that we face:

  1. Latency: delay is hazardous in many critical applications, such as autonomous vehicles (AVs). A round trip to the cloud takes time, especially with large data flows over limited bandwidth.
  2. Reliability: depending on the internet and other communication networks makes the application fragile.
  3. Privacy: handling private information far from its data source is a major concern that every giant company is currently facing.
  4. Cost: CPUs/GPUs generally consume more power and are far more costly than edge devices.

This is what Edge Computing looks like:

Fig 1.1: End-Edge-Cloud Architecture

Edge Computing covers the cloud, edge, and end layers collectively. Each layer's shortcomings, whether in storage, training capacity, or latency, are overcome by another.

End: the place where data is generated.

Edge: computing power near the data source, which makes critical decisions faster and schedules jobs to send collected data to the cloud at non-peak times.

Cloud Services: training and hardware-specific model conversion.

What is the Intel OpenVINO Toolkit?

Fig 1.2: Intel OpenVINO Toolkit Workflow

The OpenVINO™ toolkit is a comprehensive toolkit for quickly developing applications and solutions that emulate human vision. Based on Convolutional Neural Networks (CNNs), the toolkit extends CV workloads across Intel® hardware, maximizing performance.

  1. Benchmark your model's performance irrespective of the DL framework it was trained in: TensorFlow, PyTorch, or MXNet.
  2. The Model Optimizer tool converts the trained model into an Intermediate Representation (IR).
  3. The highly optimized, hardware-specific Inference Engine delivers a faster and lighter model (see the sketch after this list).
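
Here is a minimal sketch of that two-step workflow for the Fashion-MNIST model, assuming an OpenVINO 2020.x installation. The file names, input shape, and Model Optimizer invocation are illustrative assumptions, not a definitive recipe.

```python
# Step 1 (Model Optimizer, run from a shell): convert a frozen TensorFlow
# graph into IR (.xml topology + .bin weights). Paths here are assumptions:
#   python mo_tf.py --input_model fashion_mnist.pb --input_shape [1,28,28,1]

# Step 2 (Inference Engine): load the IR and run it on an Intel device
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="fashion_mnist.xml", weights="fashion_mnist.bin")
exec_net = ie.load_network(network=net, device_name="CPU")  # or "MYRIAD", etc.

input_blob = next(iter(net.input_info))
out_blob = next(iter(net.outputs))

# IR expects NCHW layout; a random tensor stands in for a real sample here
image = np.random.rand(1, 1, 28, 28).astype(np.float32)
result = exec_net.infer(inputs={input_blob: image})
print(result[out_blob].shape)  # (1, 10) class scores
```

The same IR files run unchanged on a CPU, an integrated GPU, or a Myriad VPU; only the `device_name` argument changes, which is what makes the Inference Engine hardware-specific yet portable across Intel devices.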

What is TensorFlow Lite?

TensorFlow Lite is an open-source deep learning framework for on-device inference, commonly used on Arm-based hardware such as the Raspberry Pi and Coral boards.
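
For the same Fashion-MNIST model, conversion is a few lines with the TFLite converter. A minimal sketch, assuming the Keras model saved earlier; the optimization flag is optional:

```python
import tensorflow as tf

# Load the Keras model saved during training (file name is an assumption)
model = tf.keras.models.load_model("fashion_mnist_model.h5")

# Convert to a TFLite flatbuffer; Optimize.DEFAULT enables basic quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("fashion_mnist.tflite", "wb") as f:
    f.write(tflite_model)
```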

I will continue this series. There is a lot more to come, with implementations going all the way from local development to production.

Please give it a clap and feel free to connect with me: My LinkedIn

References:

  1. OpenVINO Toolkit
  2. Convergence of Edge Computing and Deep Learning: A Comprehensive Survey
  3. TensorFlow Lite
