Arm NN

Installation

Arm NN is packaged in the science:machinelearning OBS project, and developed at https://github.com/ARM-software/armnn.

You can install ARM-NN from:

Current status

Options enabled in the aarch64 Arm NN package, as of 2019-10-01:

 Feature                | Tumbleweed             | Upcoming Leap 15.2             | Leap 15.1
 NEON support           | Yes                    | Yes                            | Yes
 OpenCL support (GPU)*  | Yes                    | Yes                            | Yes
 Caffe support          | Yes                    | Yes                            | Yes
 ONNX support           | No (WIP)               | No (WIP)                       | No (WIP)
 TensorFlow support     | Yes, but boo#1152671   | No (TensorFlow fails to build) | No (TensorFlow fails to build)
 TensorFlowLite support | Yes                    | No (no flatbuffers package)    | No (no flatbuffers package)

* OpenCL support (GPU) has not been tested on openSUSE yet. According to [1], it requires a GPU with OpenCL 1.2 (or, for better performance, OpenCL 2.0 or OpenCL 1.x + cl_arm_non_uniform_work_group_size). Upstream tests are done on Mali GPUs.

Tests

SimpleSample

Run SimpleSample and enter a number when prompted (here 458):

 Please enter a number: 
 458
 Your number was 458

Caffe backend

CaffeInception_BN-Armnn

The CaffeInception_BN-Armnn example uses a Caffe model on top of Arm NN for image classification. You need to get the data and the model, so please download:

Arm NN cannot use this model as-is; it must be converted first:

  • the batch size must be set to 1 (instead of 10)
  • Arm NN does not support all Caffe syntaxes, so some older neural-network model files must be updated to the latest Caffe syntax

So, you need to:

  • Copy deploy.prototxt to deploy_armnn.prototxt and update the file to set the batch size to 1:
 --- models/deploy.prototxt      2019-10-01 13:25:13.502886667 +0000
 +++ models/deploy_armnn.prototxt        2019-10-01 13:38:55.860972787 +0000
 @@ -3,7 +3,7 @@ layer {
 name: "data"
 type: "Input"
 top: "data"
 -  input_param { shape: { dim: 10 dim: 3 dim: 224 dim: 224 } }
 +  input_param { shape: { dim: 1 dim: 3 dim: 224 dim: 224 } }
 }
 layer { 
  • and run the following convert.py script from the 'models/' folder (requires python3-caffe):
 #!/usr/bin/python3
 import caffe

 # Load the original batch-size-10 network together with its weights
 net = caffe.Net('deploy.prototxt', 'Inception21k.caffemodel', caffe.TEST)
 # Load the same weights into the batch-size-1 definition and save them
 # under the name CaffeInception_BN-Armnn expects
 new_net = caffe.Net('deploy_armnn.prototxt', 'Inception21k.caffemodel', caffe.TEST)
 new_net.save('Inception-BN-batchsize1.caffemodel')
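
As a quick sanity check, you can verify that the edited prototxt now declares a batch size of 1. This is a minimal sketch: the helper name and the inline snippet are illustrative, and in practice you would read deploy_armnn.prototxt from disk:

```python
import re

def input_batch_size(prototxt_text):
    """Return the first 'dim:' value of the input_param shape, or None."""
    m = re.search(r"input_param\s*{\s*shape:\s*{\s*dim:\s*(\d+)", prototxt_text)
    return int(m.group(1)) if m else None

# Example with the edited line from the diff above:
snippet = 'input_param { shape: { dim: 1 dim: 3 dim: 224 dim: 224 } }'
print(input_batch_size(snippet))  # 1
```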

Now, you can run CaffeInception_BN-Armnn --data-dir=data --model-dir=models:

 ArmNN v20190800
 = Prediction values for test #0
 Top(1) prediction is 3694 with value: 0.255735
 Top(2) prediction is 3197 with value: 0.0031263
 Top(3) prediction is 1081 with value: 0.000757725
 Top(4) prediction is 567 with value: 0.000526447
 Top(5) prediction is 559 with value: 9.72124e-05
 Total time for 1 test cases: 0.088 seconds
 Average time per test case: 88.260 ms
 Overall accuracy: 1.000
 Runtime::UnloadNetwork(): Unloaded network with ID: 0

CaffeResNet-Armnn

The CaffeResNet-Armnn example uses a Caffe model on top of Arm NN for image classification. You need to get the data and the model, so please download:

And run CaffeResNet-Armnn --data-dir=data --model-dir=models:

 ArmNN v20190800
 
 = Prediction values for test #0
 Top(1) prediction is 21 with value: 0.466987
 Top(2) prediction is 7 with value: 0.000633067
 Top(3) prediction is 1 with value: 2.17822e-06
 Top(4) prediction is 0 with value: 6.27832e-08
 = Prediction values for test #1
 Top(1) prediction is 2 with value: 0.511024
 Top(2) prediction is 0 with value: 2.7405e-07
 Total time for 2 test cases: 0.205 seconds
 Average time per test case: 102.741 ms
 Overall accuracy: 1.000
 Runtime::UnloadNetwork(): Unloaded network with ID: 0

CaffeMnist-Armnn

The CaffeMnist-Armnn example uses a Caffe model on top of Arm NN for handwritten digit recognition. In this example, the first test image is the digit 7.

You need to get the data and the model, so please install arm-ml-example:

As CaffeMnist-Armnn expects slightly different file names, you need to rename the files:

 cp -r /usr/share/armnn-mnist/* /tmp/
 mv /tmp/data/t10k-labels-idx1-ubyte /tmp/data/t10k-labels.idx1-ubyte
 mv /tmp/data/t10k-images-idx3-ubyte /tmp/data/t10k-images.idx3-ubyte
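
The files being renamed are standard MNIST IDX files, whose big-endian header encodes a magic number (2049 for label files, 2051 for image files) and an item count. A stdlib-only sketch (the helper is hypothetical) shows how to check that a file's content matches what its name claims:

```python
import struct

def read_idx_header(data):
    """Decode the big-endian IDX header: magic number and item count.
    Magic 2049 (0x801) = label file, 2051 (0x803) = image file."""
    magic, count = struct.unpack(">II", data[:8])
    return magic, count

# Example with a synthetic label-file header claiming 10000 items:
header = struct.pack(">II", 2049, 10000)
print(read_idx_header(header))  # (2049, 10000)
```

Running it against the first 8 bytes of t10k-labels.idx1-ubyte should report magic 2049 and 10000 items.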

And run CaffeMnist-Armnn --data-dir=/tmp/data/ --model-dir=/tmp/model/:

 ArmNN v20190800
 
 = Prediction values for test #0
 Top(1) prediction is 7 with value: 1
 Top(2) prediction is 0 with value: 0
 = Prediction values for test #1
 Top(1) prediction is 2 with value: 1
 Top(2) prediction is 0 with value: 0
 = Prediction values for test #5
 Top(1) prediction is 1 with value: 1
 Top(2) prediction is 0 with value: 0
 = Prediction values for test #8
 Top(1) prediction is 5 with value: 1
 Top(2) prediction is 0 with value: 0
 = Prediction values for test #9
 Top(1) prediction is 9 with value: 1
 Top(2) prediction is 0 with value: 0
 Total time for 5 test cases: 0.008 seconds
 Average time per test case: 1.569 ms
 Overall accuracy: 1.000
 Runtime::UnloadNetwork(): Unloaded network with ID: 0

You may add -c CpuRef (standard C++), -c CpuAcc (NEON accelerated), or -c GpuAcc (GPU accelerated, requires OpenCL) to select the compute backend.

MNIST Caffe example

The MNIST Caffe example uses a Caffe model on top of Arm NN for handwritten digit recognition. In this example, the digit is 7.

You must install ARM ML examples and associated data from:

Go to the data folder:

 cd /usr/share/armnn-mnist/

and run mnist_caffe:

 Predicted: 7
 Actual: 7

ONNX backend

OnnxMnist-Armnn

The OnnxMnist-Armnn example uses an ONNX model on top of Arm NN for handwritten digit recognition. In this example, the digit is 7.

You need to get the data, so please install arm-ml-example:

And download the model from https://onnxzoo.blob.core.windows.net/models/opset_8/mnist/mnist.tar.gz

As OnnxMnist-Armnn expects slightly different file names, you need to rename the files:

 cp -r /usr/share/armnn-mnist/* /tmp/
 mv /tmp/data/t10k-labels-idx1-ubyte /tmp/data/t10k-labels.idx1-ubyte
 mv /tmp/data/t10k-images-idx3-ubyte /tmp/data/t10k-images.idx3-ubyte

For the model:

 wget https://onnxzoo.blob.core.windows.net/models/opset_8/mnist/mnist.tar.gz
 tar xzf mnist.tar.gz
 cp mnist/model.onnx /tmp/model/mnist_onnx.onnx 

And run OnnxMnist-Armnn --data-dir=/tmp/data/ --model-dir=/tmp/model/ -i 1:

 ArmNN v20190800
 = Prediction values for test #0
 Top(1) prediction is 7 with value: 28.34
 Top(2) prediction is 3 with value: 9.42895
 Top(3) prediction is 2 with value: 8.64272
 Top(4) prediction is 1 with value: 0.627583
 Top(5) prediction is 0 with value: -1.25672
 Total time for 1 test cases: 0.002 seconds
 Average time per test case: 2.278 ms
 Overall accuracy: 1.000
 Runtime::UnloadNetwork(): Unloaded network with ID: 0
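
The prediction values printed above are raw network outputs (logits), not probabilities. A small softmax sketch, using the top-5 values copied from the run above, shows how dominant the top prediction is:

```python
import math

def softmax(logits):
    """Numerically stable softmax: subtract the max before exponentiating."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Top-5 logits from the OnnxMnist-Armnn run above:
logits = [28.34, 9.42895, 8.64272, 0.627583, -1.25672]
probs = softmax(logits)
print(probs[0])  # the top class (digit 7) takes essentially all of the mass
```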

You may add -c CpuRef (standard C++), -c CpuAcc (NEON accelerated), or -c GpuAcc (GPU accelerated, requires OpenCL) to select the compute backend.

OnnxMobileNet-Armnn

The OnnxMobileNet-Armnn example uses an ONNX model on top of Arm NN for image classification. In this example, it classifies images of a shark, a dog, and a cat.

You need to get the mobilenetv2 model for ONNX, so:

For the data, you need to download:

  • an image of a shark, rename it shark.jpg and place it in the data/ folder.
  • an image of a dog, rename it Dog.jpg and place it in the data/ folder.
  • an image of a cat, rename it Cat.jpg and place it in the data/ folder.


And run OnnxMobileNet-Armnn --data-dir=data --model-dir=models -i 3:

 ArmNN v20190800
 Performance test running in DEBUG build - results may be inaccurate.
 = Prediction values for test #0
 Top(1) prediction is 273 with value: 16.4625
 Top(2) prediction is 227 with value: 13.9884
 Top(3) prediction is 225 with value: 11.6609
 Top(4) prediction is 168 with value: 11.3706
 Top(5) prediction is 159 with value: 9.35255
 = Prediction values for test #1
 Top(1) prediction is 281 with value: 16.7145
 Top(2) prediction is 272 with value: 5.43621
 Top(3) prediction is 271 with value: 5.3766
 Top(4) prediction is 51 with value: 5.24998
 Top(5) prediction is 24 with value: 2.50436
 = Prediction values for test #2
 Top(1) prediction is 2 with value: 21.4471
 Top(2) prediction is 0 with value: 4.55977
 Total time for 3 test cases: 0.164 seconds
 Average time per test case: 54.651 ms
 Overall accuracy: 0.667
 Runtime::UnloadNetwork(): Unloaded network with ID: 0


You may add -c CpuRef (standard C++), -c CpuAcc (NEON accelerated), or -c GpuAcc (GPU accelerated, requires OpenCL) to select the compute backend.

TensorFlow backend

MNIST TensorFlow example

The MNIST TensorFlow example uses a TensorFlow model on top of Arm NN for handwritten digit recognition. In this example, the digit is 7.

You must install ARM ML examples (and associated data) from:

Go to the data folder:

 cd /usr/share/armnn-mnist/

and run mnist_tf:

 Predicted: 7
 Actual: 7

TensorFlow Lite backend

TensorFlow Lite examples may print some errors (depending on your images), since your cat image may be recognized as a 'Tabby Cat' (label 282) rather than the expected 'Tiger Cat' (label 283); see https://github.com/ARM-software/armnn/issues/165#issuecomment-538299546

To run the TfLite*-Armnn examples, you need to download the models and extract them to the models/ folder:

 # Only the *.tflite files are needed, but more files are in the archives
 wget http://download.tensorflow.org/models/tflite/mnasnet_1.3_224_09_07_2018.tgz
 tar xzf mnasnet_*.tgz
 mv mnasnet_*/ models
 pushd models
 wget http://download.tensorflow.org/models/tflite_11_05_08/inception_v3_quant.tgz
 tar xzf inception_v3_quant.tgz
 wget http://download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_1.0_224_quant.tgz
 tar xzf mobilenet_v1_1.0_224_quant.tgz
 wget http://download.tensorflow.org/models/tflite_11_05_08/mobilenet_v2_1.0_224_quant.tgz
 tar xzf mobilenet_v2_1.0_224_quant.tgz
 popd

You may also get the labels from the MobileNet V1 archive: https://storage.googleapis.com/download.tensorflow.org/models/tflite/mobilenet_v1_1.0_224_quant_and_labels.zip
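
The Top(n) predictions printed by these examples are class indices, which the labels file maps to human-readable names. A sketch (assuming one label per line, as in the labels.txt shipped in that archive; the helper and the inline label list are illustrative) ranks scores and attaches names:

```python
def top_k(scores, labels, k=3):
    """Return the k highest-scoring (label, score) pairs, best first."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return [(labels[i], scores[i]) for i in ranked[:k]]

# Example with a tiny inline label list instead of the real labels.txt:
labels = ["background", "tench", "goldfish", "great white shark"]
scores = [0.01, 0.02, 0.17, 0.80]
print(top_k(scores, labels, k=2))  # [('great white shark', 0.8), ('goldfish', 0.17)]
```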

For the data, you need to download:

  • an image of a shark, rename it shark.jpg and place it in the data/ folder.
  • an image of a dog, rename it Dog.jpg and place it in the data/ folder.
  • an image of a cat, rename it Cat.jpg and place it in the data/ folder.

TfLiteInceptionV3Quantized-Armnn example

Once you have the models/ and data/ folders ready, you can run TfLiteInceptionV3Quantized-Armnn --data-dir=data --model-dir=models

TfLiteMnasNet-Armnn example

Once you have the models/ and data/ folders ready, you can run TfLiteMnasNet-Armnn --data-dir=data --model-dir=models

TfLiteMobilenetQuantized-Armnn example

Once you have the models/ and data/ folders ready, you can run TfLiteMobilenetQuantized-Armnn --data-dir=data --model-dir=models

TfLiteMobilenetV2Quantized-Armnn example

Once you have the models/ and data/ folders ready, you can run TfLiteMobilenetV2Quantized-Armnn --data-dir=data --model-dir=models