- Boost deep learning performance in computer vision, automatic speech recognition, natural language processing and other common tasks
- Use models trained with popular frameworks like TensorFlow, PyTorch and more
- Reduce resource demands and efficiently deploy on a range of Intel® platforms from edge to cloud
This open-source version includes several components: the Model Optimizer, the OpenVINO™ Runtime, the Post-Training Optimization Tool, and the CPU, GPU, GNA, multi-device and heterogeneous plugins that accelerate deep learning inference on Intel® CPUs and Intel® Processor Graphics. It supports pre-trained models from the Open Model Zoo, along with more than 100 open-source and public models in popular formats such as TensorFlow, ONNX, PaddlePaddle, MXNet, Caffe and Kaldi.
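The typical runtime workflow — read a model, compile it for a device plugin, then run inference — can be sketched in Python. Treat this as an illustrative sketch: the function name and model path are placeholders, and it assumes the 2023.x `openvino.runtime` Python API.

```python
def compile_for_device(model_path: str, device: str = "CPU"):
    """Illustrative sketch: load a model and compile it for one device plugin."""
    from openvino.runtime import Core  # 2023.x Python API

    core = Core()                             # discovers the installed plugins
    model = core.read_model(model_path)       # IR (.xml), ONNX, PaddlePaddle, ...
    return core.compile_model(model, device)  # e.g. "CPU", "GPU", "AUTO"
```

Calling the returned object with an input tensor (e.g. `compiled([input_tensor])`) then runs synchronous inference.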
CPU Processor Requirements
Systems based on the Intel® 64 architectures below are supported as both host and target platforms.
- 6th to 13th generation Intel® Core™ processors
- 1st to 4th generation Intel® Xeon® Scalable processors
- Intel® Pentium® processor N4200/5, N3350/5, N3450/5 with Intel® HD Graphics
- Intel Atom® processor with Intel® Streaming SIMD Extensions 4.2 (Intel® SSE4.2)
Newer versions of the operating system kernel may be required by 10th and 11th generation Intel Core processors, 11th generation Intel Core S-Series processors, 12th and 13th generation Intel Core processors, and 4th generation Intel Xeon Scalable processors to support CPU, GPU, Intel GNA, or hybrid-core CPU capabilities.
Intel® Gaussian & Neural Accelerator (Intel® GNA)
- Intel® GNA
GPU Processors Supported
- Intel® HD Graphics
- Intel® UHD Graphics
- Intel® Iris® Pro Graphics
- Intel® Iris® Xe Graphics
- Intel® Iris® Xe MAX Graphics
Discrete Graphics Supported
- Intel® Data Center GPU Flex Series (formerly code named Arctic Sound)
- Intel® Arc™ GPU (formerly code named DG2)
Additional Software Requirements
- GNU Compiler Collection (GCC)*
- Python* 3.7-3.10
Install CMake*, pkg-config and the GNU* development tools to build the samples. Although CMake and pkg-config are not required by the OpenVINO tools and toolkits themselves, many examples are provided as CMake projects and require CMake to build them, and in some cases pkg-config is needed to locate the libraries required to complete the application build.
Intel compilers leverage existing GNU build toolchains to provide a complete C/C++ development environment. If your Linux distribution does not include the full set of GNU development tools, you will need to install them. To install CMake, pkg-config, the OpenCL headers, and the GNU development tools on your Linux system, open a terminal session and enter the following commands:
$ sudo zypper update
$ sudo zypper --non-interactive install cmake pkg-config \
    patterns-devel-C-C++-devel_C_C++ \
    opencl-headers opencl-cpp-headers ocl-icd-devel \
    opencv-devel pugixml-devel patchelf \
    python311-devel ccache nlohmann_json-devel \
    ninja scons git git-lfs fdupes \
    rpm-build ShellCheck tbb-devel libva-devel \
    snappy-devel zlib-devel gflags-devel-static \
    protobuf-devel
Verify the installation by displaying the installation location with this command:
$ which cmake pkg-config make gcc g++
One or more of these locations will display:
/usr/bin/cmake
/usr/bin/pkg-config
/usr/bin/make
/usr/bin/gcc
/usr/bin/g++
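If any of those paths is missing, the corresponding package did not install correctly. A small script along these lines — a sketch using only POSIX shell, nothing OpenVINO-specific — can report exactly which tools are absent:

```shell
#!/bin/sh
# Print each required build tool and where it lives, or flag it as missing.
for tool in cmake pkg-config make gcc g++; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: $(command -v "$tool")"
    else
        echo "$tool: NOT FOUND"
    fi
done
```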
Compiling from source (.rpm coming soon)
We will compile OpenVINO from source code obtained directly from GitHub (RPM packages will be available soon). As an openSUSE member and Intel Edge Innovator, I have personally taken on the packaging process and the publication of RPM packages for the openSUSE Linux platform, so it will soon be possible to perform the installation solely through the zypper command.
Download: GitHub instructions
Below are the commands to download version 2023.1 (the latest release at the time of publication):
$ git clone -b 2023.1.0 https://github.com/openvinotoolkit/openvino.git
$ cd openvino && git submodule update --init --recursive
Install the Python dependencies for building the Python wheels:
$ python3 -m pip install -U pip
$ python3 -m pip install -r ./src/bindings/python/src/compatibility/openvino/requirements-dev.txt
$ python3 -m pip install -r ./src/bindings/python/wheel/requirements-dev.txt
Now we will compile and install OpenVINO with the instructions below:
$ mkdir build && mkdir openvino_dist && cd build
$ cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=../openvino_dist \
    -DBUILD_SHARED_LIBS=ON -DENABLE_OV_ONNX_FRONTEND=ON \
    -DENABLE_OV_PADDLE_FRONTEND=ON -DENABLE_OV_IR_FRONTEND=ON \
    -DENABLE_OV_PYTORCH_FRONTEND=ON \
    -DENABLE_OV_TF_FRONTEND=ON -DENABLE_OV_TF_LITE_FRONTEND=ON \
    -DENABLE_INTEL_GNA=OFF -DENABLE_PYTHON=ON -DENABLE_WHEEL=ON \
    -DPYTHON_EXECUTABLE=`which python3.10` \
    -DPYTHON_LIBRARY=/usr/lib64/libpython3.10.so \
    -DPYTHON_INCLUDE_DIR=/usr/include/python3.10 ..
$ make --jobs=$(nproc --all)
$ make install
Install the built Python wheels for the OpenVINO Runtime and the OpenVINO development tools:
$ python3 -m pip install openvino-dev --find-links ../openvino_dist/tools
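To confirm the wheel is visible to your interpreter, a quick guarded import check can be run. This is a sketch: it assumes the 2023.x `openvino.runtime.get_version()` API, and it prints a notice instead of failing when the wheel is not on the interpreter's path.

```shell
python3 - <<'EOF'
# Guarded import check: reports the runtime version, or explains what is missing.
try:
    from openvino.runtime import get_version
    print("OpenVINO runtime:", get_version())
except ImportError:
    print("openvino is not importable - recheck the wheel installation")
EOF
```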
Quick test of the built OpenVINO runtime
Now, with OpenVINO compiled and installed in the distribution folder, initialize the OpenVINO development environment with the commands below to test it:
# cd ../openvino_dist/
# source ./setupvars.sh
[setupvars.sh] OpenVINO environment initialized
Add the Open Model Zoo (omz) tools and the built libraries to your environment variables:
export PATH=$PATH:/home/cabelo/.local/bin
export PYTHONPATH=$PYTHONPATH:<openvino_repo>/openvino/bin/intel64/Release/python/
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<openvino_repo>/openvino/bin/intel64/Release/
Now, create a model directory and install the Model Optimizer dependencies:
$ mkdir ~/ov_models
$ pip3 install onnx==1.11.0 protobuf==3.19.4 openvino-dev[pytorch]
Before running the hello_classification.py test, we must download the alexnet model with the commands below:
$ cd samples/python/hello_classification/
$ omz_downloader --name alexnet
################|| Downloading alexnet ||################
========== Downloading /opt/intel/openvino_2023.1.0/samples/python/hello_classification/public/alexnet/alexnet.prototxt
... 100%, 3 KB, 18505 KB/s, 0 seconds passed
========== Downloading /opt/intel/openvino_2023.1.0/samples/python/hello_classification/public/alexnet/alexnet.caffemodel
... 100%, 238146 KB, 5134 KB/s, 46 seconds passed
========== Replacing text in /opt/intel/openvino_2023.1.0/samples/python/hello_classification/public/alexnet/alexnet.prototxt
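Note that omz_downloader fetches only the original Caffe files (alexnet.prototxt and alexnet.caffemodel); the IR files (alexnet.xml / alexnet.bin) used in the next step come from a conversion pass. Assuming the standard Open Model Zoo tooling installed above, that step would look like:

```shell
$ omz_converter --name alexnet --precisions FP32
```

omz_converter invokes Model Optimizer under the hood and writes the IR into public/alexnet/FP32/.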
Okay, if everything is working correctly, run the command below to test the classification example in the Python language.
$ python3 hello_classification.py public/alexnet/FP32/alexnet.xml /dados/openvino/banana.jpg CPU
[ INFO ] Creating OpenVINO Runtime Core
[ INFO ] Reading the model: public/alexnet/FP32/alexnet.xml
[ INFO ] Loading the model to the plugin
[ INFO ] Starting inference in synchronous mode
[ INFO ] Image path: /dados/openvino/banana.jpg
[ INFO ] Top 10 results:
[ INFO ] class_id probability
[ INFO ] --------------------
[ INFO ] 954      0.9988611
[ INFO ] 951      0.0003525
[ INFO ] 950      0.0002846
[ INFO ] 666      0.0002556
[ INFO ] 502      0.0000543
[ INFO ] 945      0.0000491
[ INFO ] 659      0.0000155
[ INFO ] 600      0.0000136
[ INFO ] 953      0.0000134
[ INFO ] 940      0.0000102
[ INFO ]
[ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
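For context, the top-10 listing above is just the sample's post-processing of the network's output tensor: softmax the logits, then sort. That step can be sketched in plain Python (numpy assumed available; the toy logits are made up for illustration):

```python
import numpy as np

def top_k(logits, k=10):
    """Return (class_id, probability) pairs for the k highest-scoring classes."""
    # Softmax converts raw network outputs into probabilities;
    # subtracting the max first keeps the exponentials numerically stable.
    exp = np.exp(logits - np.max(logits))
    probs = exp / exp.sum()
    # argsort is ascending: take the last k indices, then reverse for descending.
    ids = np.argsort(probs)[-k:][::-1]
    return [(int(i), float(probs[i])) for i in ids]

# Toy 5-class output vector; class 3 dominates.
for class_id, prob in top_k(np.array([0.1, 2.5, 0.3, 5.0, 1.0]), k=3):
    print(f"{class_id}\t{prob:.7f}")
```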